[2.7] ceph-osd stuck in "agent initializing"
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Canonical Juju | Fix Released | Critical | Harry Pidcock |
Bug Description
There appears to be a regression from 2.6.9. I can't use Ceph on AWS. User-facing symptom:
Unit Workload Agent Machine Public address Ports Message
ceph-mon/0* waiting idle 0 54.234.190.27 Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-mon/1 waiting idle 1 18.212.147.104 Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-mon/2 waiting idle 2 75.101.192.68 Monitor bootstrapped but waiting for number of OSDs to reach expected-osd-count (3)
ceph-osd/0* waiting allocating 3 54.172.158.50 agent initializing
ceph-osd/1 waiting allocating 4 34.229.113.197 agent initializing
ceph-osd/2 waiting allocating 5 3.208.87.212 agent initializing
Details here: https:/
Looks similar to bug #1778033.
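For anyone reproducing, the deployment described above can be sketched roughly as follows. The region, storage directive, and unit counts are assumptions inferred from the status output, not taken verbatim from the report:

```shell
#!/bin/sh
# Hypothetical reproduction sketch: a 3-monitor / 3-OSD Ceph cluster
# on AWS under Juju 2.6.9. Storage pool and size are assumptions.
set -e

juju bootstrap aws/us-east-1

# Three monitors; the charm blocks until expected-osd-count (3) is met.
juju deploy -n 3 ceph-mon

# Three OSD units with EBS-backed osd-devices storage (assumed size).
juju deploy -n 3 ceph-osd --storage osd-devices=ebs,10G,1

juju add-relation ceph-mon ceph-osd

# On 2.6.9 the ceph-osd units reportedly stay in "agent initializing".
juju status
```

This is a sketch of the scenario, not the reporter's exact commands; the linked details above would have the actual bundle.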
Changed in juju:
status: New → Incomplete

Changed in juju:
milestone: none → 2.6.10
importance: Undecided → Critical
status: New → Fix Committed

Changed in juju:
status: Fix Committed → Fix Released
I tried your steps with a 2.7 build straight from the develop branch, and it succeeded:
Model Controller Cloud/Region Version SLA Timestamp
default hpidcock aws/us-east-1 2.7-beta1.1 unsupported 12:18:17+10:00
App Version Status Scale Charm Store Rev OS Notes
ceph-mon 12.2.12 active 3 ceph-mon jujucharms 42 ubuntu
ceph-osd 12.2.12 active 3 ceph-osd jujucharms 291 ubuntu
Unit Workload Agent Machine Public address Ports Message
ceph-mon/0* active idle 0 3.230.170.116 Unit is ready and clustered
ceph-mon/1 active idle 1 100.26.17.9 Unit is ready and clustered
ceph-mon/2 active idle 2 50.19.130.214 Unit is ready and clustered
ceph-osd/0 active idle 3 3.226.252.64 Unit is ready (2 OSD)
ceph-osd/1* active idle 4 34.204.42.232 Unit is ready (2 OSD)
ceph-osd/2 active executing 5 18.210.15.39 Unit is ready (2 OSD)
Machine State DNS Inst id Series AZ Message
0 started 3.230.170.116 i-07f143c04f0d970e2 bionic us-east-1a running
1 started 100.26.17.9 i-05ec51127fe7cd483 bionic us-east-1b running
2 started 50.19.130.214 i-04d941f270cc49bff bionic us-east-1c running
3 started 3.226.252.64 i-0e09265fc266dabc3 bionic us-east-1a running
4 started 34.204.42.232 i-0c3baba4c1390e313 bionic us-east-1c running
5 started 18.210.15.39 i-06b53f0edb5e3017a bionic us-east-1b running
Is it possible you are hitting AWS limits?
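One way to check that hypothesis (an assumption on my part, requiring the AWS CLI and account credentials) is to query the per-region instance limit for the region used here:

```shell
# Hypothetical check: the classic per-region on-demand instance limit.
# Six m-class instances are launched by the deployment above, so a low
# max-instances value (or a vCPU service quota) could explain stuck
# "allocating" machines.
aws ec2 describe-account-attributes \
    --attribute-names max-instances \
    --region us-east-1
```

Note that newer accounts use vCPU-based quotas instead, visible via `aws service-quotas` or the EC2 console's Limits page.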