bootstrapped-osds not updated when using zap/add-disk actions
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ceph OSD Charm | Fix Released | High | Edward Hope-Morley |
Bug Description
I deployed a ceph cluster using next (20.02) charms where each node has 6 OSDs. All was well, every unit active/idle. I then reformatted two OSDs (keeping the same device names) and they came back up fine, but ceph-mon is now stuck with:
ceph-mon/0 waiting idle 0/lxd/0 10.230.56.225 Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (18)
Checking the relation between mon and osd I see:
$ juju run -u ceph-mon/0 -- relation-get -r osd:10 bootstrapped-osds ceph-osd/0
4
But they are all definitely up.
$ juju ssh ceph-osd/0 -- pgrep -alf bin/ceph-osd
1108725 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
1109368 /usr/bin/ceph-osd -f --cluster ceph --id 4 --setuser ceph --setgroup ceph
1109958 /usr/bin/ceph-osd -f --cluster ceph --id 6 --setuser ceph --setgroup ceph
1110639 /usr/bin/ceph-osd -f --cluster ceph --id 8 --setuser ceph --setgroup ceph
1113021 /usr/bin/ceph-osd -f --cluster ceph --id 10 --setuser ceph --setgroup ceph
1113667 /usr/bin/ceph-osd -f --cluster ceph --id 12 --setuser ceph --setgroup ceph
Changed in charm-ceph-osd:
importance: Undecided → High
milestone: none → 20.02
tags: added: sts
status: Fix Committed → Fix Released
Bit more visualisation - https://pastebin.ubuntu.com/p/wk4M9bgXGn/
So the count is updated in ceph_hooks.prepare_disks_and_activate(), which is not called directly from any action. In my case it was called subsequent to a zap-disk but not after the add-disk actions, which is why the count was never updated.
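The failure mode described above can be sketched as follows. This is a minimal illustrative model, not the charm's real code: `prepare_disks_and_activate`, `add_disk_action`, and the dict standing in for juju relation data are all hypothetical stand-ins, assumed only to mirror the control flow (the action brings up an OSD but the hook that republishes `bootstrapped-osds` never runs unless called explicitly).

```python
# Sketch of the bug and its fix direction. All names are illustrative
# stand-ins for charm-ceph-osd internals, not the actual charm API.

relation_data = {"bootstrapped-osds": 0}  # stands in for juju relation data
local_osds = []                            # OSD ids started on this unit


def prepare_disks_and_activate():
    """Activate disks and (re-)announce how many OSDs have bootstrapped."""
    relation_data["bootstrapped-osds"] = len(local_osds)


def add_disk_action(device):
    """Action handler: bring up an OSD, then refresh the relation count.

    The bug: without the explicit call below, bootstrapped-osds is never
    updated after add-disk, so ceph-mon keeps waiting on a stale count.
    """
    local_osds.append(device)
    prepare_disks_and_activate()


for dev in ["/dev/sdb", "/dev/sdc"]:
    add_disk_action(dev)

print(relation_data["bootstrapped-osds"])  # 2
```

With the explicit call removed from `add_disk_action`, the relation value stays at its old count even though the OSD processes are running, matching the `relation-get` output of 4 against 6 live `ceph-osd` processes seen above.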