
storage: role doesn't support 78 characters' length for volume name #84

Closed
yizhanglinux opened this issue Apr 15, 2020 · 12 comments · Fixed by #174

Comments

yizhanglinux (Collaborator) commented Apr 15, 2020

playbook:

- hosts: all
  become: true
  vars:
    volume_group_size: '10g'
    volume_size: '80g'
    storage_safe_mode: false

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 2

    - name: Create three LVM logical volumes under one volume group
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo1
            disks: ["{{ unused_disks[0] }}"]
            volumes:
              - name: abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
                size: "{{ volume_size }}"
                mount_point: '/opt/test1'

    - name: Clean up
      include_role:
        name: storage
      vars:
        storage_pools:
          - name: foo1
            disks: ["{{ unused_disks[0] }}"]
            state: absent
            volumes: []

Execution log:

TASK [storage : Apply defaults to pools and volumes [6/6]] **************************************************************************************************************************************************

TASK [storage : debug] **************************************************************************************************************************************************************************************
ok: [localhost] => {
    "_storage_pools": [
        {
            "disks": [
                "sdd"
            ],
            "name": "foo1",
            "state": "present",
            "type": "lvm",
            "volumes": [
                {
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "xfs",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "/opt/test1",
                    "name": "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz",
                    "pool": "foo1",
                    "size": "80g",
                    "state": "present",
                    "type": "lvm"
                }
            ]
        }
    ]
}

TASK [storage : debug] **************************************************************************************************************************************************************************************
ok: [localhost] => {
"_storage_volumes": []
}

TASK [storage : get required packages] **********************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"actions": [], "changed": false, "leaves": [], "mounts": [], "msg": "failed to set up volume 'abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz'", "packages": [], "pools": [], "volumes": []}

PLAY RECAP **************************************************************************************************************************************************************************************************
localhost : ok=33 changed=0 unreachable=0 failed=1 skipped=12 rescued=0 ignored=0

yizhanglinux changed the title from "storage role doesn't support 78 character's length for volume name" to "storage role doesn't support 78 characters' length for volume name" on Apr 15, 2020
richm (Contributor) commented Apr 15, 2020

@yizhanglinux What platform are you running ansible-playbook on? What version of ansible? What platform are you trying to configure storage on? What version of the storage role are you using?

richm (Contributor) commented Apr 15, 2020

I have tried running the latest master storage role (ansible host is Fedora 31 with ansible 2.9.6; managed hosts are Fedora 31 and CentOS 7). In both cases I get past TASK [storage : get required packages]. This looks like some sort of blivet error. That task basically runs blivet so that blivet can report which packages it needs to install: https://github.com/linux-system-roles/storage/blob/master/tasks/main-blivet.yml#L90

- name: get required packages
  blivet:
    pools: "{{ _storage_pools }}"
    volumes: "{{ _storage_volumes }}"
    use_partitions: "{{ storage_use_partitions }}"
    disklabel_type: "{{ storage_disklabel_type }}"
    packages_only: true
  register: package_info

It should not be trying to set up a volume; with packages_only: true it should only return a list of packages.
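
For illustration, here is a minimal, self-contained sketch of how a packages_only guard like that is expected to behave (illustrative names only, not the role's actual blivet module): compute the package list and return before any device is created or modified.

# Hypothetical sketch, not the storage role's actual code: with
# packages_only=True the module should report the package list and stop
# before any pool or volume is set up.
def run_module(params):
    packages = set()
    for pool in params.get("pools", []):
        if pool.get("type") == "lvm":
            packages.add("lvm2")          # LVM pools need lvm2
        for vol in pool.get("volumes", []):
            if vol.get("fs_type") == "xfs":
                packages.add("xfsprogs")  # xfs volumes need xfsprogs

    if params.get("packages_only"):
        # Stop here: no volume setup should ever be attempted in this mode.
        return {"changed": False, "packages": sorted(packages)}

    # ... actual pool/volume management would only happen past this point ...
    return {"changed": True, "packages": sorted(packages)}

print(run_module({"pools": [{"type": "lvm",
                             "volumes": [{"fs_type": "xfs"}]}],
                  "packages_only": True}))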

pcahyna (Member) commented Apr 15, 2020

@richm that's a problem with "get required packages" in general; apparently it does too much. It even used to change mounts (a test for this is here: #47).

yizhanglinux (Collaborator, Author) commented Apr 16, 2020

@yizhanglinux What platform are you running ansible-playbook on? What version of ansible? What platform are you trying to configure storage on? What version of the storage role are you using?

Hi @richm
Here is my environment:
ansible 2.9.6
latest storage role
running on RHEL 7.8 in localhost mode

The error comes from https://github.com/linux-system-roles/storage/blob/master/library/blivet.py#L339; it seems to be a limitation of Blivet.new_lv.

dwlehman (Collaborator) commented:

If you have it handy, the /tmp/blivet.log would be helpful. Blivet does limit names for new devices to 96 characters, so your name should be acceptable. Are you sure the disk has space for an 80 GiB volume?

yizhanglinux (Collaborator, Author) commented:

If you have it handy, the /tmp/blivet.log would be helpful. Blivet does limit names for new devices to 96 characters, so your name should be acceptable. Are you sure the disk has space for an 80 GiB volume?

I retested it on the same server with a freshly installed RHEL 7.8, but cannot reproduce it now.
Closing; I will reopen it if I reproduce it in the future.

pcahyna (Member) commented May 8, 2020

@yizhanglinux can you please add a test for this, or modify an existing test (i.e. use a longer volume name in an existing test)?

yizhanglinux (Collaborator, Author) commented:

@yizhanglinux can you please add a test for this, or modify an existing test (i.e. use a longer volume name in an existing test)?

Sure, will try it next week.

yizhanglinux (Collaborator, Author) commented:

Reopening this issue as I have reproduced it again.

playbook

$ cat tests/temp.yml 
---
- hosts: all
  become: true
  vars:
    volume_group_size: '10g'
    volume_size: '80g'
    storage_safe_mode: false

  tasks:
    - include_role:
        name: storage

    - include_tasks: get_unused_disk.yml
      vars:
        min_size: "{{ volume_group_size }}"
        max_return: 2

    - name: Create one logical volumes which has 4 char vg and 78 lv
      include_role:
        name: storage
      vars:
        storage_pools:
            - name: foo4
              disks: ["{{ unused_disks[0] }}"]
              volumes:
                - name: abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwx78
                  size: "{{ volume_size }}"
                  mount_point: '/opt/test1'

    - include_tasks: verify-role-results.yml

    - name: Clean up
      include_role:
        name: storage
      vars:
        storage_pools:
            - name: foo4
              disks: ["{{ unused_disks[0] }}"]
              state: absent
              volumes: []

$ ansible-playbook --flush-cache -i inventory tests/temp.yml -vvvv

---snip---
TASK [storage : debug] ***********************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:84
ok: [localhost] => {
    "_storage_pools": [
        {
            "disks": [
                "nvme0n1"
            ],
            "name": "foo4",
            "state": "present",
            "type": "lvm",
            "volumes": [
                {
                    "fs_create_options": "",
                    "fs_label": "",
                    "fs_overwrite_existing": true,
                    "fs_type": "xfs",
                    "mount_check": 0,
                    "mount_device_identifier": "uuid",
                    "mount_options": "defaults",
                    "mount_passno": 0,
                    "mount_point": "/opt/test1",
                    "name": "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwx78",
                    "pool": "foo4",
                    "size": "80g",
                    "state": "present",
                    "type": "lvm"
                }
            ]
        }
    ]
}

TASK [storage : debug] ***********************************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:87
ok: [localhost] => {
    "_storage_volumes": []
}

TASK [storage : get required packages] *******************************************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:90
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1590074002.7148309-5365-17108202520595 && echo ansible-tmp-1590074002.7148309-5365-17108202520595="` echo /root/.ansible/tmp/ansible-tmp-1590074002.7148309-5365-17108202520595 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-48834074jvab/tmplwhox46g TO /root/.ansible/tmp/ansible-tmp-1590074002.7148309-5365-17108202520595/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1590074002.7148309-5365-17108202520595/ /root/.ansible/tmp/ansible-tmp-1590074002.7148309-5365-17108202520595/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1590074002.7148309-5365-17108202520595/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1590074002.7148309-5365-17108202520595/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "actions": [],
    "changed": false,
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": true,
            "pools": [
                {
                    "disks": [
                        "nvme0n1"
                    ],
                    "name": "foo4",
                    "state": "present",
                    "type": "lvm",
                    "volumes": [
                        {
                            "fs_create_options": "",
                            "fs_label": "",
                            "fs_overwrite_existing": true,
                            "fs_type": "xfs",
                            "mount_check": 0,
                            "mount_device_identifier": "uuid",
                            "mount_options": "defaults",
                            "mount_passno": 0,
                            "mount_point": "/opt/test1",
                            "name": "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwx78",
                            "pool": "foo4",
                            "size": "80g",
                            "state": "present",
                            "type": "lvm"
                        }
                    ]
                }
            ],
            "safe_mode": true,
            "use_partitions": null,
            "volumes": []
        }
    },
    "leaves": [],
    "mounts": [],
    "packages": [
        "lvm2",
        "xfsprogs"
    ],
    "pools": [],
    "volumes": []
}

TASK [storage : make sure required packages are installed] ***********************************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:99
Running dnf
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1590074006.57936-5444-49739373224150 && echo ansible-tmp-1590074006.57936-5444-49739373224150="` echo /root/.ansible/tmp/ansible-tmp-1590074006.57936-5444-49739373224150 `" ) && sleep 0'
Using module file /usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py
<localhost> PUT /root/.ansible/tmp/ansible-local-48834074jvab/tmpxulouv5n TO /root/.ansible/tmp/ansible-tmp-1590074006.57936-5444-49739373224150/AnsiballZ_dnf.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1590074006.57936-5444-49739373224150/ /root/.ansible/tmp/ansible-tmp-1590074006.57936-5444-49739373224150/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1590074006.57936-5444-49739373224150/AnsiballZ_dnf.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1590074006.57936-5444-49739373224150/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "allow_downgrade": false,
            "autoremove": false,
            "bugfix": false,
            "conf_file": null,
            "disable_excludes": null,
            "disable_gpg_check": false,
            "disable_plugin": [],
            "disablerepo": [],
            "download_dir": null,
            "download_only": false,
            "enable_plugin": [],
            "enablerepo": [],
            "exclude": [],
            "install_repoquery": true,
            "install_weak_deps": true,
            "installroot": "/",
            "list": null,
            "lock_timeout": 30,
            "name": [
                "lvm2",
                "xfsprogs"
            ],
            "releasever": null,
            "security": false,
            "skip_broken": false,
            "state": "present",
            "update_cache": false,
            "update_only": false,
            "validate_certs": true
        }
    },
    "msg": "Nothing to do",
    "rc": 0,
    "results": []
}

TASK [storage : manage the pools and volumes to match the specified state] *******************************************************************************************************************************************************************
task path: /root/test/storage/tasks/main-blivet.yml:104
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~root && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir /root/.ansible/tmp/ansible-tmp-1590074010.442808-5460-148571839017757 && echo ansible-tmp-1590074010.442808-5460-148571839017757="` echo /root/.ansible/tmp/ansible-tmp-1590074010.442808-5460-148571839017757 `" ) && sleep 0'
Using module file /root/test/storage/library/blivet.py
<localhost> PUT /root/.ansible/tmp/ansible-local-48834074jvab/tmpqbs32hel TO /root/.ansible/tmp/ansible-tmp-1590074010.442808-5460-148571839017757/AnsiballZ_blivet.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1590074010.442808-5460-148571839017757/ /root/.ansible/tmp/ansible-tmp-1590074010.442808-5460-148571839017757/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/libexec/platform-python /root/.ansible/tmp/ansible-tmp-1590074010.442808-5460-148571839017757/AnsiballZ_blivet.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1590074010.442808-5460-148571839017757/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
  File "/tmp/ansible_blivet_payload_6ni037yi/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 810, in run_module
  File "/tmp/ansible_blivet_payload_6ni037yi/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 615, in manage_pool
  File "/tmp/ansible_blivet_payload_6ni037yi/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 540, in manage
  File "/tmp/ansible_blivet_payload_6ni037yi/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 524, in _manage_volumes
  File "/tmp/ansible_blivet_payload_6ni037yi/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 272, in manage
  File "/tmp/ansible_blivet_payload_6ni037yi/ansible_blivet_payload.zip/ansible/modules/blivet.py", line 377, in _create
fatal: [localhost]: FAILED! => {
    "actions": [],
    "changed": false,
    "invocation": {
        "module_args": {
            "disklabel_type": null,
            "packages_only": false,
            "pools": [
                {
                    "disks": [
                        "nvme0n1"
                    ],
                    "name": "foo4",
                    "state": "present",
                    "type": "lvm",
                    "volumes": [
                        {
                            "fs_create_options": "",
                            "fs_label": "",
                            "fs_overwrite_existing": true,
                            "fs_type": "xfs",
                            "mount_check": 0,
                            "mount_device_identifier": "uuid",
                            "mount_options": "defaults",
                            "mount_passno": 0,
                            "mount_point": "/opt/test1",
                            "name": "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwx78",
                            "pool": "foo4",
                            "size": "80g",
                            "state": "present",
                            "type": "lvm"
                        }
                    ]
                }
            ],
            "safe_mode": false,
            "use_partitions": null,
            "volumes": []
        }
    },
    "leaves": [],
    "mounts": [],
    "msg": "failed to set up volume 'abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwx78'",
    "packages": [],
    "pools": [],
    "volumes": []
}

PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
localhost                  : ok=35   changed=0    unreachable=0    failed=1    skipped=12   rescued=0    ignored=0  

blivet.log

https://pastebin.com/Spkd4f5L

yizhanglinux reopened this May 21, 2020
yizhanglinux (Collaborator, Author) commented:

From my testing, I found the following rules for VolumeGroup and LogicalVolume name length during LVM creation.
Here vg means the length of the VolumeGroup name and lv means the length of the LogicalVolume name.

if vg > 55; then
    "msg": "failed to set up pool"
elif vg + lv > 95; then
    PASS: truncate lv to lv+vg=95
elif vg + lv <= 95; then
    if lv > 55; then
        "msg": "failed to set up volume"
    else
        PASS
    fi
fi

I will follow the above rules to design test cases for vg/lv name lengths; a sketch of how they could be encoded follows.
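
A small, purely illustrative sketch encoding the behavior observed above (the 55 and 95 character thresholds are the values seen in these experiments, not documented limits):

# Illustrative only: encodes the observed behavior so candidate test cases
# can be checked against it; thresholds come from the experiments above.
def expected_outcome(vg_len, lv_len):
    if vg_len > 55:
        return "failed to set up pool"
    if vg_len + lv_len > 95:
        return "pass (lv name truncated so that vg + lv == 95)"
    if lv_len > 55:
        return "failed to set up volume"
    return "pass"

# Candidate vg/lv name-length combinations for the tests:
for vg_len, lv_len in [(4, 78), (55, 40), (56, 10), (20, 80)]:
    print(vg_len, lv_len, "->", expected_outcome(vg_len, lv_len))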

yizhanglinux changed the title from "storage role doesn't support 78 characters' length for volume name" to "storage: role doesn't support 78 characters' length for volume name" on Aug 8, 2020
japokorn (Contributor) commented Oct 2, 2020

Length of VG and LV names is intentionally limited to 55 characters by blivet.

According to the LVM developers, vgname + lvname is limited to 126 characters minus the number of hyphens, and possibly minus up to another 8 characters in some unspecified set of situations. Instead of figuring all of that out, no one gets a vg or lv name longer than 55.
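
Put differently, the conservative rule described above can be illustrated as follows (a simplified sketch of the stated policy, not blivet's actual code):

# Simplified illustration of the policy described above, not blivet's code:
# rather than reasoning about the 126-character vgname+lvname budget (minus
# hyphens, and possibly another 8 characters in some situations), both names
# are simply capped at 55 characters.
MAX_NAME_LEN = 55

def names_within_limit(vg_name, lv_name):
    return len(vg_name) <= MAX_NAME_LEN and len(lv_name) <= MAX_NAME_LEN

print(names_within_limit("foo4", "a" * 78))  # False: the 78-character LV name is rejected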

japokorn (Contributor) commented Oct 2, 2020

To prevent user confusion from the generic error message, I have included more information in the message.
(See #174)

japokorn linked a pull request on Oct 2, 2020 that will close this issue