zap-disk action should fail if target disk is actively used by LVM, or handle the LVM removal
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ceph OSD Charm | Fix Released | Medium | nikhil kshirsagar | 21.10
Bug Description
Today I ran the zap-disk action to reset a previously-deployed device so I could re-run add-disk. Unfortunately it failed, because zap-disk does not appear to handle target disks that are hosting LVM volumes.
Since the charm provisioned the LVM volumes during add-disk, it feels like zap-disk (or some other action) should handle cleanup of those volumes. Alternatively, it should fail and alert the user that LVM volume cleanup needs to be done first.
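The "fail and alert" alternative could be a simple pre-flight check before zapping. The sketch below is not the charm's actual code; it assumes `pvs --noheadings -o pv_name` output (one device path per line) and a hypothetical helper name:

```python
import subprocess


def lvm_pvs(pvs_output: str) -> set[str]:
    """Parse `pvs --noheadings -o pv_name` output into a set of device paths."""
    return {line.strip() for line in pvs_output.splitlines() if line.strip()}


def check_not_lvm_member(device: str) -> None:
    """Raise a clear error if `device` is still an LVM physical volume.

    A zap-disk style action could call this first and surface an actionable
    message instead of failing partway through.
    """
    out = subprocess.check_output(
        ["pvs", "--noheadings", "-o", "pv_name"], text=True
    )
    if device in lvm_pvs(out):
        raise RuntimeError(
            f"{device} is still an LVM physical volume; remove it "
            f"(e.g. `pvremove --force {device}`) before zapping"
        )
```

Parsing is split out from the `pvs` call so the decision logic can be exercised without touching real LVM state.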
I'm presently uncertain of the exact charm version involved; the cloud in question is undergoing maintenance that impacts Juju. I'm guessing this affects current trunk but am not certain; please dismiss this if it's already fixed.
In case someone else hits this issue, I resolved it with "pvremove --force <device>"; after zap-disk had run, LVM commands at the volume group or logical volume level no longer worked for this device, so only the PV-level removal succeeded. That then allowed me to run add-disk successfully.
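The recovery path above can be sketched as a short shell sequence. The device path and unit name are illustrative, and the grep-based check is an assumption about `pvs` output layout, not something the charm provides:

```shell
#!/bin/sh
# Return success if the device still appears in `pvs` output as a physical
# volume (output is passed as an argument so the check is easy to test).
still_a_pv() {
    dev="$1"; pvs_out="$2"
    printf '%s\n' "$pvs_out" | grep -q "^[[:space:]]*${dev}[[:space:]]"
}

# Manual recovery as reported above (device and unit are examples):
#   sudo pvremove --force /dev/sdb          # clears stale PV metadata
#   juju run-action --wait ceph-osd/0 add-disk osd-devices=/dev/sdb
```

The destructive commands are left as comments deliberately; run them only once you have confirmed the device is the one the failed zap-disk targeted.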
Changed in charm-ceph-osd:
status: New → Triaged
importance: Undecided → Medium

Changed in charm-ceph-osd:
milestone: none → 21.10

Changed in charm-ceph-osd:
status: Fix Committed → Fix Released
Fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/charm-ceph-osd/+/804520