ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-NN/: (11) Resource temporarily unavailable
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Ceph OSD Charm | Won't Fix | Low | Unassigned | |
| ceph (Ubuntu) | Fix Released | Medium | Unassigned | |
Bug Description
ussuri-focal, charms revision 20.10.
ceph-osd fails to initialize 5 out of 36 OSDs on each storage node every time I redeploy.
I have 3 storage nodes, each with 36x 4TB disks used as OSDs, with bcache set up in front of the OSD devices. Every time I redeploy the bundle, each storage node ends up with only 31 OSDs initialized. During the initialization of the remaining 5 OSDs, the following errors occur:
unit-ceph-osd-0: 22:03:20 WARNING unit.ceph-
(the same warning line is repeated seven times; the message text is truncated in this report)
I'm attaching logs for both a failed and a successful OSD initialization.
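For reference, the ceph-osd section of the bundle has roughly this shape (a minimal sketch; the device paths below are placeholders, not the exact values from my deployment):

```yaml
# Minimal sketch of the ceph-osd application in the bundle.
# Device paths are placeholders; the real deployment lists the
# 36 bcache-backed devices present on each storage node.
applications:
  ceph-osd:
    charm: cs:ceph-osd
    num_units: 3
    options:
      # Space-separated list of block devices to initialize as OSDs.
      osd-devices: /dev/bcache0 /dev/bcache1 /dev/bcache2
```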
I also tried deploying with a limited number of OSDs configured in the bundle initially. To start with, I listed only 6 disks in the osd-devices config option.
Once the first 6 OSDs had been initialized, I added the next 10 disks with `juju config ceph-osd osd-devices=...`. The additional 10 OSDs were initialized successfully.
I repeated this process two more times to reach 36 OSDs in total, but while the last batch was being processed the error occurred again, and again 5 OSDs per storage node failed to initialize. The batching commands are sketched below.
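Roughly, the batching sequence was as follows (a sketch; device names are placeholders for the actual bcache devices, and bash brace expansion is used for brevity):

```bash
# Batch 1: deploy the bundle with only 6 devices in osd-devices,
# then grow the list in batches of 10 until all 36 are listed.
# Device names are placeholders for the real bcache devices.
juju config ceph-osd osd-devices="$(echo /dev/bcache{0..15})"   # 6 -> 16 OSDs
juju config ceph-osd osd-devices="$(echo /dev/bcache{0..25})"   # 16 -> 26 OSDs
juju config ceph-osd osd-devices="$(echo /dev/bcache{0..35})"   # 26 -> 36 OSDs
```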