Ceph health error in case of 1 SSD disk
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Juniper Openstack | Status tracked in Trunk | | | |
| R4.0 | Won't Fix | Medium | Jeya ganesh babu J | |
| Trunk | Invalid | Medium | Jeya ganesh babu J | |
Bug Description
r4.0-11 Mitaka provision of an Ubuntu 14.04.5 cluster with Ceph storage.
Ceph reports a health error when the cluster has only one SSD disk:
root@cmbu-
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aa85e5e1559d 10.87.140.
root@cmbu-
root@cmbu-
cluster 14e36cda-
health HEALTH_ERR
64 pgs are stuck inactive for more than 300 seconds
64 pgs degraded
64 pgs stuck inactive
64 pgs undersized
monmap e1: 3 mons at {cmbu-ceph-
osdmap e99: 12 osds: 12 up, 12 in
flags sortbitwise
pgmap v315: 1600 pgs, 7 pools, 0 bytes data, 0 objects
445 MB used, 8651 GB / 8651 GB avail
root@cmbu-
exit
root@cmbu-
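The status output above shows 64 placement groups undersized, degraded and stuck inactive, which presumably all belong to the SSD-backed pool. A quick way to confirm which pool is affected (a minimal sketch of standard Ceph CLI commands, run on a monitor node):

    ceph health detail        # lists the unhealthy PGs and the pools they belong to
    ceph osd pool ls detail   # shows each pool's replicated size and min_size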
Though the replica size is set to 1 in the cluster.json, the replica count appears to have been set to 2 for the SSD pool. Since there is only one SSD drive, Ceph reports the error.
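If a single replica on the SSD pool is genuinely the intent, the pool's replication factor can be lowered by hand as a workaround (a sketch only; "volumes_ssd" is an assumed pool name, not taken from this cluster):

    # NOTE: "volumes_ssd" is a placeholder; substitute the pool reported
    # as undersized by "ceph health detail".
    ceph osd pool get volumes_ssd size        # confirm the current replica count (expected: 2)
    ceph osd pool set volumes_ssd size 1      # one replica to match the single SSD OSD
    ceph osd pool set volumes_ssd min_size 1

With size and min_size set to 1, the undersized/degraded PG errors for that pool should clear once its PGs go active+clean.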
Not a major issue; reprioritizing storage activity.