after zap-disk of all ceph paths, remove-unit will re-configure ceph-osd services/mounts
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ceph OSD Charm | Fix Committed | Medium | Unassigned |
Bug Description
In a bionic distro/queens environment running the ceph-osd-294 charm, with nova-compute and ceph-osd hulk-smashed on each metal, I am working through a process to remove Vault OSD encryption from the deployment.
My rough process is:
juju config ceph-osd --reset osd-encrypt
Then, for each ceph-osd node:
ceph osd out $id (for each osd device on a host)
Wait for ceph to finish rebalancing
ceph osd purge $id --yes-i-
juju zap-disk ceph-osd/$unit_id zap-devices="$(juju config ceph-osd osd-devices)" yes-i-really-
juju remove-unit ceph-osd/$unit_id
ceph osd crush remove $hostname
upgrade kernel on the machine
reboot the machine
juju add-unit ceph-osd --to $machine_id
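For reference, the per-host sequence above can be sketched as a dry-run shell function that only echoes the commands for review before anything is executed. The unit id, machine id, hostname, and osd ids are placeholders, and the zap-disk action parameters are my reading of the charm's actions; verify them with `juju actions ceph-osd` before running for real:

```shell
#!/bin/sh
# Dry-run sketch of the per-host drain/remove sequence described above.
# It only echoes the commands; pipe the output to "sh" to execute them.
drain_host() {
    unit_id=$1; machine_id=$2; hostname=$3; shift 3  # remaining args: osd ids
    for id in "$@"; do
        echo "ceph osd out $id"
    done
    echo "# wait here until 'ceph -s' shows all PGs active+clean"
    for id in "$@"; do
        echo "ceph osd purge $id --yes-i-really-mean-it"
    done
    echo "juju run-action ceph-osd/$unit_id zap-disk devices=\"\$(juju config ceph-osd osd-devices)\" i-really-mean-it=yes"
    echo "juju remove-unit ceph-osd/$unit_id"
    echo "ceph osd crush remove $hostname"
    echo "# kernel upgrade and reboot of the machine happen out of band here"
    echo "juju add-unit ceph-osd --to $machine_id"
}

# e.g. unit ceph-osd/15 on machine 7 (hostname metal01) hosting osd.4-6
drain_host 15 7 metal01 4 5 6
```

Reviewing the echoed output first makes it obvious where the remove-unit/add-unit round trip sits relative to the CRUSH cleanup, which is exactly where the re-add described below bites.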
When I get to 'ceph osd crush remove $hostname', I find that the host still exists in the OSD tree and that the OSD devices have been re-added by the ceph-osd charm during juju remove-unit, since the osd-devices were clean and ready to be re-formatted. I suspect this is caused by the reactive framework not recognizing the intent to remove the unit: it sees the unconfigured disks and re-configures its relations with ceph-mon, etc.
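A quick way to confirm whether a host was re-added is to grep the CRUSH tree for its hostname. The tree text below is a made-up illustration so the check is self-contained; on a live cluster the input would come from `ceph osd tree`:

```shell
# Check whether a hostname still appears as a CRUSH host bucket.
# Sample output for illustration; in a live cluster use: ceph osd tree
tree_output='ID CLASS WEIGHT  TYPE NAME        STATUS
-1       3.00000 root default
-3       1.00000     host metal01
 0  hdd  1.00000         osd.0        up'

host_in_tree() {
    echo "$tree_output" | grep -qw "host $1"
}

if host_in_tree metal01; then
    echo "metal01 is still in the CRUSH tree"
fi
```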
I believe some investigation may be necessary to trap hooks like ceph-mon-
As I'm working through this on a site with many hosts needing this change, I'll try to capture either a workaround or a clean process, along with logs, to determine why this is happening.
I suspect that just zapping the disks and running config-changed may be more prudent than removing and re-adding the unit, since the disks were re-added without encryption anyway.
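A minimal, echo-only sketch of that in-place alternative follows. The zap-disk and add-disk action names and parameters are my assumptions from reading the charm's action list, not something confirmed in this bug; check `juju actions ceph-osd` before use:

```shell
#!/bin/sh
# Echo-only sketch: zap and re-add a unit's disks in place instead of
# remove-unit/add-unit, so the reactive framework never sees a departing
# unit with clean disks. Pipe the output to "sh" to execute.
rezap_unit() {
    unit=$1; devices=$2
    echo "juju run-action $unit zap-disk devices='$devices' i-really-mean-it=yes"
    echo "juju run-action $unit add-disk osd-devices='$devices'"
}

rezap_unit ceph-osd/15 "/dev/sdb /dev/sdc"
```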
tags: | added: scaleback |
Changed in charm-ceph-osd: | |
importance: | Wishlist → Medium |
importance: | Medium → Wishlist |
importance: | Wishlist → Medium |
The mon-relation-departed hook triggers a ceph bootstrap and disk rescan:
unit-ceph-osd-15: 18:29:05 INFO unit.ceph-osd/15.juju-log mon:50: ceph bootstrapped, rescanning disks
https://pastebin.canonical.com/p/Cf6Q79xVSS/