add-disk action with ceph jewel breaks with default bluestore setting
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Ceph OSD Charm | Fix Released | High | Chris MacNaughton | |
| charms.ceph | Fix Released | High | Chris MacNaughton | |
Bug Description
Deployed a 3-OSD cluster; all good. Then ran the add-disk action for 3 extra disks and got the following for all 3 new disks:
```
2019-03-15 12:28:33.687278 7f0ad38e0b00 -1 unable to create object store
2019-03-15 12:29:50.635098 7f4aac13eb00 0 set uid:gid to 64045:64045 (ceph:ceph)
2019-03-15 12:29:50.635236 7f4aac13eb00 0 ceph version 10.2.11 (e4b061b47f07f5
2019-03-15 12:29:50.635305 7f4aac13eb00 5 object store type is bluestore
2019-03-15 12:29:50.635340 7f4aac13eb00 -1 *** experimental feature 'bluestore' is not enabled ***
This feature is marked as experimental, which means it
- is untested
- is unsupported
- may corrupt your data
- may break your cluster is an unrecoverable fashion
To enable this feature, add this to your ceph.conf:
enable experimental unrecoverable data corrupting features = bluestore
```
Changed in charm-ceph-osd:
milestone: 19.04 → 19.07
Changed in charm-ceph-osd:
milestone: 19.07 → 19.10
Changed in charm-ceph-osd:
milestone: 19.10 → 20.01
In other words, I deployed a Ceph cluster from the Ocata UCA (i.e. Jewel) with the charm defaults left alone, so bluestore=enabled. The deployment worked fine and I ended up with 3 Filestore OSDs as expected. I then ran add-disk for 3 more OSD disks and it failed with the error above.
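For anyone hitting this before the fix landed: on Jewel, BlueStore is still marked experimental, so the flag the log asks for has to be present in ceph.conf before the OSD can be prepared. A minimal sketch of the ceph.conf fragment, per the message in the log above:

```ini
# ceph.conf fragment — opt in to the experimental BlueStore backend on Jewel
[global]
enable experimental unrecoverable data corrupting features = bluestore
```

Alternatively, assuming the standard ceph-osd charm option named `bluestore`, setting `juju config ceph-osd bluestore=false` before running add-disk should make the new disks come up as Filestore OSDs, matching the original three.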