1/3 nodes error on ceph deployment with message 'ceph-osd/0* hook failed: "config-changed"'
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ceph OSD Charm | Fix Released | Undecided | Unassigned |
Bug Description
While deploying Ceph on Juju with an OpenStack cloud via this tutorial (https:/
2021-10-08 07:55:30 ERROR juju.worker.
2021-10-08 07:55:30 WARNING unit.ceph-
To understand this error message better, you can view the source code here (https:/
if not osd_journal.
raise ValueError(
This error message essentially says that the osd-journal and the osd-device point to the same device.
Further inspection confirms this is indeed the case: the osd-journal for this device is the same as the default device for osd-devices.
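The check that produces this ValueError can be sketched as follows. This is a minimal illustration, not the actual charm source; the function name and message text are hypothetical and chosen only to show the kind of validation involved:

```python
# Hypothetical sketch of the validation behind the ValueError above:
# refuse to proceed when a device is used for both OSD data and journal.

def validate_osd_layout(osd_devices, osd_journals):
    """Raise ValueError if any device is used for both data and journal."""
    overlap = set(osd_devices) & set(osd_journals)
    if overlap:
        raise ValueError(
            "osd-journal and osd-device must not share a device: "
            + ", ".join(sorted(overlap)))

# Mirrors the bug: the default osd-devices value '/dev/vdb' collides
# with the device Juju attached for the journal.
try:
    validate_osd_layout(["/dev/vdb"], ["/dev/vdb"])
except ValueError as err:
    print(err)  # osd-journal and osd-device must not share a device: /dev/vdb
```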
Default for osd-devices:
juju config ceph-osd | grep -i -C 5 devices
Location of osd-journal:
juju run -u ceph-osd/0 storage-list
juju run -u ceph-osd/0 'storage-get -s osd-journals/2'
The workaround solution here is to configure the osd-device to a different device with: juju config ceph-osd osd-devices=
A more permanent solution would be for the charm to manage this during deployment and ensure that osd-journals and osd-devices are never allocated to the same device.
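The workaround above can be sketched as the following commands, assuming a spare block device is available on the unit. The device path `/dev/vdc` is hypothetical; substitute whatever unused device your machine actually has:

```shell
# Point osd-devices at a device that is not already used for the journal.
# /dev/vdc is a hypothetical spare device; adjust for your environment.
juju config ceph-osd osd-devices=/dev/vdc

# Retry the failed config-changed hook once the config is corrected.
juju resolved ceph-osd/0
```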
description: updated
Changed in charm-ceph-osd:
milestone: none → 22.04
Changed in charm-ceph-osd:
status: Fix Committed → Fix Released
This is an issue with the ceph-osd charm's default for osd-devices, which is '/dev/vdb'. When using --storage (Juju storage), there is no guarantee of which device will be associated with osd-devices and which with osd-journals.
A workaround to make this work at deploy time is to pass `--config osd-devices=""`, which sets an empty device list in the Juju config.
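The deploy-time workaround can be sketched as follows; the storage pool name and sizes here are illustrative, not taken from the bug report:

```shell
# Deploy with an empty osd-devices config so the charm relies solely on
# Juju storage, avoiding the '/dev/vdb' default colliding with a journal.
# Pool name (cinder), sizes, and counts are illustrative examples.
juju deploy ceph-osd --config osd-devices="" \
    --storage osd-devices=cinder,32G,1 \
    --storage osd-journals=cinder,8G,1
```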
We should re-evaluate if it's a good idea to use /dev/vdb as the default.