Brief Description
-----------------
After adding a new PV, adding Ceph, and unlocking the controller, it is not possible to get the admin credentials.
Severity
--------
Critical
Steps to Reproduce
------------------
- run bootstrap
- configure ntp
- configure disk:
ROOT_DISK=$(system host-show controller-0 | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list controller-0 --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"
EXT_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol controller-0 ${ROOT_DISK_UUID} 25)
EXT_PARTITION_UUID=$(echo ${EXT_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-pv-add controller-0 cgts-vg ${EXT_PARTITION_UUID}
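The UUID extraction above depends on the exact table layout printed by `system host-disk-partition-add`. A minimal, self-contained sketch of that same parse against a fabricated sample row (the UUID value is made up for illustration):

```shell
# Fabricated sample of the 'uuid' row from the partition-add table output.
EXT_PARTITION='| uuid | 1e2d3c4b-5a69-7887-96a5-b4c3d2e1f001 |'

# Same extraction as in the reproduction steps: isolate the 'uuid' row
# and print the fourth whitespace-separated field (the UUID itself).
EXT_PARTITION_UUID=$(echo "${EXT_PARTITION}" | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
echo "${EXT_PARTITION_UUID}"
```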
- add Ceph and an OSD (the OSD is added on the additional disk):
system storage-backend-add ceph --confirmed
system host-disk-list controller-0
system host-disk-list controller-0 | awk '/\/dev\/nvme1n1/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0
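The `awk | xargs` pipeline above keys off the device-path column of the disk table. A sketch of that filter against a fabricated two-row excerpt of `system host-disk-list` output (UUIDs made up) shows it selecting the second disk's UUID, which is then passed to `host-stor-add`:

```shell
# Fabricated excerpt of 'system host-disk-list controller-0' table output;
# field $2 of the row matching /dev/nvme1n1 is the disk UUID.
cat <<'EOF' | awk '/\/dev\/nvme1n1/{print $2}'
| aaaaaaaa-1111-2222-3333-444444444444 | /dev/nvme0n1 | ... |
| bbbbbbbb-5555-6666-7777-888888888888 | /dev/nvme1n1 | ... |
EOF
# prints: bbbbbbbb-5555-6666-7777-888888888888
```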
- unlock the controller
Expected Behavior
------------------
The system comes up after the unlock and the admin credentials are available.
Actual Behavior
----------------
The unlock runs fine, but after the reboot it is not possible to get the admin credentials.
Reproducibility
---------------
Reproducible
System Configuration
--------------------
AIO-SX with 2 disks (100G + 30G) running with IPv6
Branch/Pull Time/Commit
-----------------------
stx master built on "2021-04-27"
Timestamp/Logs
--------------
#puppet.log
2021-05-04T16:57:34.632 Notice: 2021-05-04 16:57:34 +0000 /Stage[main]/Platform::Lvm::Controller/Platform::Lvm::Global_filter[transition filter controller]/File_line[transition filter controller: update lvm global_filter]/ensure: created
2021-05-04T16:57:50.658 Debug: 2021-05-04 16:57:50 +0000 Exec[umount /var/lib/nova/instances](provider=posix): Executing check 'test -e /var/lib/nova/instances'
2021-05-04T16:57:50.660 Debug: 2021-05-04 16:57:50 +0000 Executing: 'test -e /var/lib/nova/instances'
2021-05-04T16:57:50.666 Debug: 2021-05-04 16:57:50 +0000 Exec[umount /dev/nova-local/instances_lv](provider=posix): Executing check 'test -e /dev/nova-local/instances_lv'
2021-05-04T16:57:50.668 Debug: 2021-05-04 16:57:50 +0000 Executing: 'test -e /dev/nova-local/instances_lv'
2021-05-04T16:57:50.672 Debug: 2021-05-04 16:57:50 +0000 Executing: '/usr/sbin/pvs /dev/disk/by-path/pci-0000:00:04.0-nvme-1-part5'
2021-05-04T16:57:50.724 Debug: 2021-05-04 16:57:50 +0000 Executing: '/usr/sbin/vgs cgts-vg'
2021-05-04T16:57:50.771 Configuration setting "global_filter" invalid. It's not part of any section.
2021-05-04T16:57:50.819 Debug: 2021-05-04 16:57:50 +0000 Executing: '/usr/sbin/vgreduce --removemissing --force cgts-vg'
2021-05-04T16:57:50.867 Configuration setting "global_filter" invalid. It's not part of any section.
2021-05-04T16:57:50.913 Debug: 2021-05-04 16:57:50 +0000 Executing: '/usr/sbin/pvs -o pv_name,vg_name,lv_name --separator ,'
2021-05-04T16:57:50.966 Error: 2021-05-04 16:57:50 +0000 Illegal quoting in line 1.
2021-05-04T16:57:51.078 Error: 2021-05-04 16:57:50 +0000 /Stage[main]/Platform::Lvm::Vg::Cgts_vg/Volume_group[cgts-vg]/physical_volumes: change from ["/dev/disk/by-path/pci-0000:00:04.0-nvme-1-part5"] to /dev/disk/by-path/pci-0000:00:04.0-nvme-1-part5 /dev/disk/by-path/pci-0000:00:04.0-nvme-1-part6 failed: Illegal quoting in line 1.
2021-05-04T16:57:51.750 Debug: 2021-05-04 16:57:51 +0000 /Stage[main]/Platform::Worker::Storage/Exec[remove udev leftovers]/unless: Configuration setting "global_filter" invalid. It's not part of any section.
2021-05-04T16:57:51.752 Debug: 2021-05-04 16:57:51 +0000 /Stage[main]/Platform::Worker::Storage/Exec[remove udev leftovers]/unless: Volume group "nova-local" not found
2021-05-04T16:57:51.754 Debug: 2021-05-04 16:57:51 +0000 /Stage[main]/Platform::Worker::Storage/Exec[remove udev leftovers]/unless: Cannot process volume group nova-local
2021-05-04T16:57:51.755 Debug: 2021-05-04 16:57:51 +0000 Exec[remove udev leftovers](provider=posix): Executing 'rm -rf /dev/nova-local || true'
2021-05-04T16:57:51.757 Debug: 2021-05-04 16:57:51 +0000 Executing: 'rm -rf /dev/nova-local || true'
2021-05-04T16:57:51.759 Notice: 2021-05-04 16:57:51 +0000 /Stage[main]/Platform::Worker::Storage/Exec[remove udev leftovers]/returns: executed successfully
2021-05-04T16:57:51.761 Debug: 2021-05-04 16:57:51 +0000 /Stage[main]/Platform::Worker::Storage/Exec[remove udev leftovers]: The container Class[Platform::Worker::Storage] will propagate my refresh event
2021-05-04T16:57:51.764 Debug: 2021-05-04 16:57:51 +0000 Exec[remove device mapper mapping](provider=posix): Executing check 'test -e /dev/mapper/nova--local-instances_lv'
2021-05-04T16:57:51.766 Debug: 2021-05-04 16:57:51 +0000 Executing: 'test -e /dev/mapper/nova--local-instances_lv'
2021-05-04T16:57:51.768 Debug: 2021-05-04 16:57:51 +0000 Class[Platform::Worker::Storage]: The container Stage[main] will propagate my refresh event
screening: marking as stx.6.0 since this was not seen in the regression testing for stx.5.0. The issue is tied to a specific storage scenario. It will not hold up stx.5.0, but can be considered for cherry-pick if the fix is needed for that release in the future.