Bootstrapping Ceph OSDs fails
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
kolla | Fix Released | Critical | Sam Yaple | mitaka-rc2
Bug Description
I'm trying to deploy OpenStack with Ceph on 10 servers, and the deploy is failing at bootstrapping the Ceph OSDs.
I prepared 4 disks on each of the 10 servers for storage, as shown in the example below.
$ sudo parted /dev/sda print
Model: ATA ST4000NM0033-9ZM (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 4001GB 4001GB ext4 KOLLA_CEPH_
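For comparison, the Kolla Ceph guide prepares a bootstrap disk roughly like this. This is a sketch: /dev/sdx is a placeholder for your device, and KOLLA_CEPH_OSD_BOOTSTRAP is the full partition label Kolla's bootstrap matches on, which the truncated name above appears to be.

# Sketch based on the Kolla Ceph guide; replace /dev/sdx with your device.
# KOLLA_CEPH_OSD_BOOTSTRAP is the partition label the bootstrap looks for.
sudo parted /dev/sdx -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1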
Here is the part that is failing:
TASK: [ceph | Bootstrapping Ceph OSDs] *******
failed: [opst1] => (item=(0, {u'device': u'/dev/sda', u'fs_uuid': u'0e246458-
msg: Container exited with non-zero return code
failed: [opst1] => (item=(1, {u'device': u'/dev/sdb', u'fs_uuid': u'046daf31-
msg: Container exited with non-zero return code
failed: [opst1] => (item=(2, {u'device': u'/dev/sdc', u'fs_uuid': u'ab562d7d-
msg: Container exited with non-zero return code
failed: [opst1] => (item=(3, {u'device': u'/dev/sdd', u'fs_uuid': u'577cd6b0-
msg: Container exited with non-zero return code
FATAL: all hosts have already failed -- aborting
Changed in kolla:
importance: Undecided → Critical
milestone: none → mitaka-rc2
status: New → Triaged

Changed in kolla:
assignee: nobody → Sam Yaple (s8m)

Changed in kolla:
status: Triaged → In Progress
This is most commonly a problem when you've attempted multiple Ceph deploys and not properly cleaned the environment.
Please remove all Ceph containers and volumes, as well as all Ceph config folders in /etc/kolla/* on all nodes, and attempt the deploy again, for example along the lines sketched below.
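A minimal cleanup sketch, assuming leftover containers and volumes have "ceph" in their names and configs live under /etc/kolla/ceph*; verify what docker ps -a and docker volume ls actually report on your nodes before removing anything.

# Sketch: remove leftover Ceph containers, volumes, and configs on each node.
# The 'ceph' name patterns and config path are assumptions; check yours first.
docker ps -a --format '{{.Names}}' | grep ceph | xargs -r docker rm -f
docker volume ls -q | grep ceph | xargs -r docker volume rm
sudo rm -rf /etc/kolla/ceph*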
If you still have an issue, run docker logs ceph_osd_bootstrap_0 (or whatever container name you have) and post those logs here.