There are duplicated host buckets in ceph osd tree
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
kolla-ansible | Fix Released | Undecided | Unassigned |
Bug Description
Deploy kolla rocky with Ceph enabled. After the deployment is done, you will find two host buckets for the same node in `ceph osd tree`, like below:
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 1.00000 root default
-2 1.00000 host 10.3.0.21
-2 1.00000 host node1
0 hdd 1.00000 osd.0 up 1.00000 1.00000
Here 10.3.0.21 and node1 are the same node.
The reason comes from this[0] patch: the `osd crush update on start` logic was moved from `ceph-osd-prestart.sh` into the ceph-osd startup process.
To reduce confusion and the impact on Ceph upgrades, I think we should disable the `osd crush update on start` feature by default.
[0] https:/
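The duplication can be illustrated with a small model of how two different actors name the same host bucket. This is a hypothetical sketch for illustration only; the dict layout and helper name are assumptions, not Ceph's actual CRUSH data structures:

```python
# Hypothetical model of duplicate host buckets in a CRUSH map.
# crush_buckets maps a host-bucket name to the OSDs placed under it.

def add_osd_bucket(crush_buckets, host_key, osd_id):
    """Place an OSD under a host bucket, creating the bucket if missing."""
    crush_buckets.setdefault(host_key, []).append(osd_id)

crush_buckets = {}

# kolla-ansible registers the OSD under a bucket named after the node IP.
add_osd_bucket(crush_buckets, "10.3.0.21", "osd.0")

# With `osd crush update on start` enabled (the ceph v11.0.0+ behavior),
# ceph-osd re-registers the OSD under a bucket named after the hostname.
add_osd_bucket(crush_buckets, "node1", "osd.0")

# The same physical node now appears under two host buckets.
print(sorted(crush_buckets))  # ['10.3.0.21', 'node1']
```

With the feature disabled, only the first (kolla-created) bucket would exist.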
Reviewed: https://review.openstack.org/615483
Committed: https://git.openstack.org/cgit/openstack/kolla-ansible/commit/?id=6db3f9f3422400e408136501b817ded3cddc5505
Submitter: Zuul
Branch: master
commit 6db3f9f3422400e408136501b817ded3cddc5505
Author: Jeffrey Zhang <email address hidden>
Date: Mon Nov 5 15:02:40 2018 +0800
Disable ceph osd crush update on start in default
The bug comes from a ceph change[0], included since ceph v11.0.0: the
`osd crush update on start` logic was moved from `ceph-osd-prestart.sh`
into the ceph-osd startup process, so ceph-osd now creates buckets by
node hostname automatically, whereas kolla creates buckets by node IP.

To reduce confusion and the impact on Ceph upgrades, disabling `osd
crush update on start` by default is the better choice.
[0] https://github.com/ceph/ceph/commit/a28b71e3c9446541e14795324f2ec1f9d69c9187
Change-Id: Ibbeac9505c9957319126267dbe6bd7a2cac11f0c
Closes-Bug: #1801662
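The merged change corresponds to turning the option off in the generated ceph.conf. A minimal sketch of the resulting setting, assuming it is placed in the `[global]` section:

```ini
# ceph.conf fragment (illustrative; section placement assumed)
[global]
# Prevent ceph-osd from re-registering itself under a hostname-named
# bucket at startup, so only the kolla-created (IP-named) bucket exists.
osd crush update on start = false
```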