AIO-SX failed to start up after unlock due to lvm_global_filter.

Bug #1927762 reported by Mihnea Saracin
Affects: StarlingX
Status: Fix Released
Importance: Medium
Assigned to: Mihnea Saracin

Bug Description

Brief Description
-----------------

After adding a new PV, adding ceph, and unlocking the controller, it is not possible to get the admin credentials.

Severity
--------

Critical

Steps to Reproduce
------------------

- run bootstrap

- configure ntp

- configure disk:

ROOT_DISK=$(system host-show controller-0 | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list controller-0 --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"
EXT_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol controller-0 ${ROOT_DISK_UUID} 25)
EXT_PARTITION_UUID=$(echo ${EXT_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-pv-add controller-0 cgts-vg ${EXT_PARTITION_UUID}

- add ceph and an OSD (I'm adding the OSD on the additional disk):

system storage-backend-add ceph --confirmed
system host-disk-list controller-0
system host-disk-list controller-0 | awk '/\/dev\/nvme1n1/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0

- unlock

Expected Behavior
------------------

The system comes up working after the unlock and the admin credentials can be retrieved.

Actual Behavior
----------------

The unlock runs fine, but after the reboot it is not possible to get the admin credentials.

Reproducibility
---------------

Reproducible

System Configuration
--------------------

AIO-SX with 2 disks (100G + 30G) running with IPv6

Branch/Pull Time/Commit
-----------------------

stx master built on "2021-04-27"

Timestamp/Logs
--------------

#puppet.log

2021-05-04T16:57:34.632 Notice: 2021-05-04 16:57:34 +0000 /Stage[main]/Platform::Lvm::Controller/Platform::Lvm::Global_filter[transition filter controller]/File_line[transition filter controller: update lvm global_filter]/ensure: created
2021-05-04T16:57:50.658 Debug: 2021-05-04 16:57:50 +0000 Exec[umount /var/lib/nova/instances](provider=posix): Executing check 'test -e /var/lib/nova/instances'
2021-05-04T16:57:50.660 Debug: 2021-05-04 16:57:50 +0000 Executing: 'test -e /var/lib/nova/instances'
2021-05-04T16:57:50.666 Debug: 2021-05-04 16:57:50 +0000 Exec[umount /dev/nova-local/instances_lv](provider=posix): Executing check 'test -e /dev/nova-local/instances_lv'
2021-05-04T16:57:50.668 Debug: 2021-05-04 16:57:50 +0000 Executing: 'test -e /dev/nova-local/instances_lv'
2021-05-04T16:57:50.672 Debug: 2021-05-04 16:57:50 +0000 Executing: '/usr/sbin/pvs /dev/disk/by-path/pci-0000:00:04.0-nvme-1-part5'
2021-05-04T16:57:50.724 Debug: 2021-05-04 16:57:50 +0000 Executing: '/usr/sbin/vgs cgts-vg'
2021-05-04T16:57:50.771 Configuration setting "global_filter" invalid. It's not part of any section.
2021-05-04T16:57:50.819 Debug: 2021-05-04 16:57:50 +0000 Executing: '/usr/sbin/vgreduce --removemissing --force cgts-vg'
2021-05-04T16:57:50.867 Configuration setting "global_filter" invalid. It's not part of any section.
2021-05-04T16:57:50.913 Debug: 2021-05-04 16:57:50 +0000 Executing: '/usr/sbin/pvs -o pv_name,vg_name,lv_name --separator ,'
2021-05-04T16:57:50.966 Error: 2021-05-04 16:57:50 +0000 Illegal quoting in line 1.
2021-05-04T16:57:51.078 Error: 2021-05-04 16:57:50 +0000 /Stage[main]/Platform::Lvm::Vg::Cgts_vg/Volume_group[cgts-vg]/physical_volumes: change from ["/dev/disk/by-path/pci-0000:00:04.0-nvme-1-part5"] to /dev/disk/by-path/pci-0000:00:04.0-nvme-1-part5 /dev/disk/by-path/pci-0000:00:04.0-nvme-1-part6 failed: Illegal quoting in line 1.
2021-05-04T16:57:51.750 Debug: 2021-05-04 16:57:51 +0000 /Stage[main]/Platform::Worker::Storage/Exec[remove udev leftovers]/unless: Configuration setting "global_filter" invalid. It's not part of any section.
2021-05-04T16:57:51.752 Debug: 2021-05-04 16:57:51 +0000 /Stage[main]/Platform::Worker::Storage/Exec[remove udev leftovers]/unless: Volume group "nova-local" not found
2021-05-04T16:57:51.754 Debug: 2021-05-04 16:57:51 +0000 /Stage[main]/Platform::Worker::Storage/Exec[remove udev leftovers]/unless: Cannot process volume group nova-local
2021-05-04T16:57:51.755 Debug: 2021-05-04 16:57:51 +0000 Exec[remove udev leftovers](provider=posix): Executing 'rm -rf /dev/nova-local || true'
2021-05-04T16:57:51.757 Debug: 2021-05-04 16:57:51 +0000 Executing: 'rm -rf /dev/nova-local || true'
2021-05-04T16:57:51.759 Notice: 2021-05-04 16:57:51 +0000 /Stage[main]/Platform::Worker::Storage/Exec[remove udev leftovers]/returns: executed successfully
2021-05-04T16:57:51.761 Debug: 2021-05-04 16:57:51 +0000 /Stage[main]/Platform::Worker::Storage/Exec[remove udev leftovers]: The container Class[Platform::Worker::Storage] will propagate my refresh event
2021-05-04T16:57:51.764 Debug: 2021-05-04 16:57:51 +0000 Exec[remove device mapper mapping](provider=posix): Executing check 'test -e /dev/mapper/nova--local-instances_lv'
2021-05-04T16:57:51.766 Debug: 2021-05-04 16:57:51 +0000 Executing: 'test -e /dev/mapper/nova--local-instances_lv'
2021-05-04T16:57:51.768 Debug: 2021-05-04 16:57:51 +0000 Class[Platform::Worker::Storage]: The container Stage[main] will propagate my refresh event
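
The repeated 'Configuration setting "global_filter" invalid. It's not part of any section.' messages suggest that, while several manifest classes were rewriting /etc/lvm/lvm.conf concurrently, the global_filter entry ended up outside the devices { } section where LVM expects it. As context only, here is a minimal Puppet sketch of the kind of file_line resource named in the log above; the path, match pattern and filter value are illustrative assumptions, not the actual StarlingX manifest:

file_line { 'update lvm global_filter':
  # assumed path and value for illustration; the real resource lives in the
  # platform::lvm manifests referenced in the log above
  path  => '/etc/lvm/lvm.conf',
  line  => '    global_filter = [ "a|/dev/disk/by-path/pci-0000:00:04.0-nvme-1-part5|", "r|.*|" ]',
  match => '^\s*global_filter =',
}
# Note: by default, if the match finds nothing, stdlib's file_line appends the
# line at the end of the file, i.e. outside any section, which is the kind of
# state LVM rejects with the error seen above.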

Changed in starlingx:
assignee: nobody → Mihnea Saracin (msaracin)
Ghada Khalil (gkhalil) wrote :

screening: marking as stx.6.0 given this was not seen in the regression testing for stx.5.0. The issue is related to a specific storage scenario. It will not hold up stx.5.0, but can be considered for cherry-pick if the fix is needed for that release in the future.

tags: added: stx.storage
tags: added: stx.6.0
Changed in starlingx:
importance: Undecided → Medium
status: New → Triaged
OpenStack Infra (hudson-openstack) wrote : Fix proposed to stx-puppet (master)

Fix proposed to branch: master
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/790398

Changed in starlingx:
status: Triaged → In Progress
OpenStack Infra (hudson-openstack) wrote : Fix merged to stx-puppet (master)

Reviewed: https://review.opendev.org/c/starlingx/stx-puppet/+/790398
Committed: https://opendev.org/starlingx/stx-puppet/commit/eec3008f600aeeb69a42338ed44332228a862d11
Submitter: "Zuul (22348)"
Branch: master

commit eec3008f600aeeb69a42338ed44332228a862d11
Author: Mihnea Saracin <email address hidden>
Date: Mon May 10 13:09:52 2021 +0300

    Serialize updates to global_filter in the AIO manifest

    Right now, looking at the aio manifest:
    https://review.opendev.org/c/starlingx/stx-puppet/+/780600/15/puppet-manifests/src/manifests/aio.pp
    there are 3 classes that update
    in parallel the lvm global_filter:
    - include ::platform::lvm::controller
    - include ::platform::worker::storage
    - include ::platform::lvm::compute
    And this generates some errors.

    We fix this by adding dependencies between the above classes
    in order to update the global_filter in a serial mode.

    Closes-Bug: 1927762
    Signed-off-by: Mihnea Saracin <email address hidden>
    Change-Id: If6971e520454cdef41138b2f29998c036d8307ff
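
For readers not familiar with Puppet ordering, a minimal sketch of the serialization the commit describes follows. The class names are taken from the commit message; the real change is in puppet-manifests/src/manifests/aio.pp in the review linked above, and the exact form of the dependencies there may differ:

include ::platform::lvm::controller
include ::platform::worker::storage
include ::platform::lvm::compute

# Chaining arrows force the classes to be applied one after another, so only
# one of them rewrites the LVM global_filter at any given time.
Class['platform::lvm::controller']
-> Class['platform::worker::storage']
-> Class['platform::lvm::compute']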

Changed in starlingx:
status: In Progress → Fix Released
OpenStack Infra (hudson-openstack) wrote : Fix proposed to stx-puppet (f/centos8)

Fix proposed to branch: f/centos8
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/792009

OpenStack Infra (hudson-openstack) wrote : Change abandoned on stx-puppet (f/centos8)

Change abandoned by "Chuck Short <email address hidden>" on branch: f/centos8
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/792009

OpenStack Infra (hudson-openstack) wrote : Fix proposed to stx-puppet (f/centos8)

Fix proposed to branch: f/centos8
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/792013

OpenStack Infra (hudson-openstack) wrote : Change abandoned on stx-puppet (f/centos8)

Change abandoned by "Chuck Short <email address hidden>" on branch: f/centos8
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/792013

OpenStack Infra (hudson-openstack) wrote : Fix proposed to stx-puppet (f/centos8)

Fix proposed to branch: f/centos8
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/792018

OpenStack Infra (hudson-openstack) wrote : Change abandoned on stx-puppet (f/centos8)

Change abandoned by "Chuck Short <email address hidden>" on branch: f/centos8
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/792018

OpenStack Infra (hudson-openstack) wrote : Fix proposed to stx-puppet (f/centos8)

Fix proposed to branch: f/centos8
Review: https://review.opendev.org/c/starlingx/stx-puppet/+/792029

OpenStack Infra (hudson-openstack) wrote : Fix merged to stx-puppet (f/centos8)

Reviewed: https://review.opendev.org/c/starlingx/stx-puppet/+/792029
Committed: https://opendev.org/starlingx/stx-puppet/commit/2b026190a3cb6d561b6ec4a46dfb3add67f1fa69
Submitter: "Zuul (22348)"
Branch: f/centos8

commit 3e3940824dfb830ebd39fd93265b983c6a22fc51
Author: Dan Voiculeasa <email address hidden>
Date: Thu May 13 18:03:45 2021 +0300

    Enable kubelet support for pod pid limit

    Enable limiting the number of pids inside of pods.

    Add a default value to protect against a missing value.
    Default to 750 pids limit to align with service parameter default
    value for most resource consuming StarlingX optional app (openstack).
    In fact any value above service parameter minimum value is good for the
    default.

    Closes-Bug: 1928353
    Signed-off-by: Dan Voiculeasa <email address hidden>
    Change-Id: I10c1684fe3145e0a46b011f8e87f7a23557ddd4a

commit 0c16d288fbc483103b7ba5dad7782e97f59f4e17
Author: Jessica Castelino <email address hidden>
Date: Tue May 11 10:21:57 2021 -0400

    Safe restart of the etcd SM service in etcd upgrade runtime class

    While upgrading the central cloud of a DC system, activation failed
    because there was an unexpected SWACT to controller-1. This was due
    to the etcd upgrade script. Part of this script runs the etcd
    manifest. This triggers a reload/restart of the etcd service. As this
    is done outside of the sm, sm saw the process failure and triggered
    the SWACT.

    This commit modifies platform::etcd::upgrade::runtime puppet class
    to do a safe restart of the etcd SM service and thus, solve the
    issue.

    Change-Id: I3381b6976114c77ee96028d7d96a00302ad865ec
    Signed-off-by: Jessica Castelino <email address hidden>
    Closes-Bug: 1928135

commit eec3008f600aeeb69a42338ed44332228a862d11
Author: Mihnea Saracin <email address hidden>
Date: Mon May 10 13:09:52 2021 +0300

    Serialize updates to global_filter in the AIO manifest

    Right now, looking at the aio manifest:
    https://review.opendev.org/c/starlingx/stx-puppet/+/780600/15/puppet-manifests/src/manifests/aio.pp
    there are 3 classes that update
    in parallel the lvm global_filter:
    - include ::platform::lvm::controller
    - include ::platform::worker::storage
    - include ::platform::lvm::compute
    And this generates some errors.

    We fix this by adding dependencies between the above classes
    in order to update the global_filter in a serial mode.

    Closes-Bug: 1927762
    Signed-off-by: Mihnea Saracin <email address hidden>
    Change-Id: If6971e520454cdef41138b2f29998c036d8307ff

commit 97371409b9b2ae3f0db6a6a0acaeabd74927160e
Author: Steven Webster <email address hidden>
Date: Fri May 7 15:33:43 2021 -0400

    Add SR-IOV rate-limit dependency

    Currently, the binding of an SR-IOV virtual function (VF) to a
    driver has a dependency on platform::networking. This is needed
    to ensure that SR-IOV is enabled (VFs created) before actually
    doing the bind.

    This dependency does not exist for configuring the VF rate-limits
    however. There is a cha...

tags: added: in-f-centos8