Just adding that I've worked around this issue with the following added to the lvm2-monitor overrides (/etc/systemd/system/lvm2-monitor.service.d/custom.conf):
[Service]
ExecStartPre=/bin/sleep 60
This results in 100% success on every single boot: no disks are missed, and all LVM volumes on those block devices come up as expected.
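For reference, this is roughly how the drop-in was put in place (a minimal sketch assuming a stock systemd layout; systemctl edit lvm2-monitor.service achieves the same thing interactively, just with a drop-in named override.conf instead of custom.conf):

  mkdir -p /etc/systemd/system/lvm2-monitor.service.d
  printf '[Service]\nExecStartPre=/bin/sleep 60\n' > /etc/systemd/system/lvm2-monitor.service.d/custom.conf
  systemctl daemon-reload
  systemd-delta --type=extended | grep lvm2-monitor   # confirm the drop-in is picked up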
We've also disabled NVMe multipathing on every Ceph storage node by adding the following to the kernel boot args in /etc/d/g:
nvme_core.multipath=0
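For completeness, a rough sketch of that change, assuming /etc/d/g expands to /etc/default/grub and GRUB is the bootloader in use:

  # in /etc/default/grub: keep the existing args and append nvme_core.multipath=0
  GRUB_CMDLINE_LINUX_DEFAULT="... nvme_core.multipath=0"
  update-grub        # or grub2-mkconfig -o <grub.cfg path> on non-Debian/Ubuntu systems
  # after the next reboot, confirm it took effect:
  cat /proc/cmdline
  cat /sys/module/nvme_core/parameters/multipath   # should print N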
Note: This LP was cloned from an internal customer case where their Ceph storage nodes were directly impacted by this issue, and this is the current workaround deployed, until/unless we can find a consistent RC for this issue in an upstream package.