My Ceph deployment had a problem after a disk was removed. Although the OSD was no longer listed in Ceph, its auth key still existed.
As a result, every attempt to add a new OSD failed, because Ceph tried to reuse the same OSD number.
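To check for and clean up a stale auth key, the standard Ceph CLI can be used. This is a sketch, assuming the leftover OSD was osd.5 (a hypothetical ID; substitute your own):

```shell
# List auth entries and look for the ghost OSD's key
ceph auth ls | grep osd.5

# Remove the stale auth key so the ID can be reused
ceph auth del osd.5

# Also make sure the OSD is fully removed from the CRUSH map and the OSD map
ceph osd crush remove osd.5
ceph osd rm 5
```

Only after all three entries (auth key, CRUSH entry, OSD map entry) are gone will Ceph hand out that OSD number cleanly again.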
This left ceph-osd in a deadlock: neither the zap-disk nor the add-disk action worked.
I discovered that the volume was still listed by lsblk, while vgs, pvs, and lvs all returned nothing. There was a backup of the VG, but vgcfgrestore refused to restore it as well.
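The mismatch can be reproduced with a few read-only commands. A minimal sketch, assuming the affected disk is /dev/sdb (a hypothetical device name):

```shell
# The device-mapper volume still shows up under the raw block device
lsblk /dev/sdb

# ...but LVM itself has no record of it
vgs
pvs
lvs

# device-mapper, however, still holds the old mapping
dmsetup ls
```

When lsblk and `dmsetup ls` show a mapping that vgs/pvs/lvs know nothing about, the kernel's device-mapper table has a leftover entry that LVM can no longer manage, which is exactly why vgcfgrestore fails too.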
The solution is to find and remove the VG manually. After that, the zap-disk and add-disk actions start working again.
dmsetup info
dmsetup remove <failed vg name>
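Put together, the cleanup looks like the following. The mapping name is hypothetical; Ceph LVM mappings usually look like `ceph--<uuid>-osd--block--<uuid>`, and the real name comes from the `dmsetup info` output:

```shell
# Inspect all device-mapper mappings; note the name of the stale one
dmsetup info -C

# Remove the stale mapping (substitute the name found above)
dmsetup remove ceph--1f2e3d4c-...-osd--block--...
```

After the mapping is gone, lsblk no longer shows the ghost volume and the disk can be zapped and re-added normally.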
These commands should be implemented in the ceph-osd charm, at least as an additional action to clean up volumes properly.
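For reference, the charm actions that start working again afterwards are run through Juju. A sketch, assuming unit ceph-osd/0 and device /dev/sdb (both hypothetical):

```shell
# Wipe the disk so the charm will accept it again
juju run-action ceph-osd/0 zap-disk devices=/dev/sdb i-really-mean-it=true --wait

# Re-add the now-clean disk as a new OSD
juju run-action ceph-osd/0 add-disk osd-devices=/dev/sdb --wait
```

A built-in action that also cleared stale device-mapper entries, as proposed above, would make the manual dmsetup step unnecessary.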