TASK [ceph : Get OSD stat percentage]: null and null cannot be divided
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
tripleo | Fix Released | High | John Fulton |
Bug Description
Deployment with internal Ceph fails with the following message:
TASK [ceph : Get OSD stat percentage] *******
Friday 05 June 2020 20:09:42 +0000 (0:00:00.298) 0:33:33.740 ***********
fatal: [undercloud -> 192.168.24.14]: FAILED! => {"ansible_facts": {"discovered_
sr/libexec/
h --cluster \"ceph\" osd stat -f json | jq '( (.num_in_osds) / (.num_osds) ) * 100'", "delta": "0:00:00.664333", "end": "2020-06-05 20:09:43.389273", "msg": "non-zero return code", "rc": 5, "start": "2020-06-05 20:09:42.724940", "stderr": "jq: error (at <stdin>:1): null (null) and null (null) cannot be divided", "stderr_lines": ["jq: error (at <stdin>:1): null (null) and null (null) cannot be divided"], "stdout": "", "stdout_lines": []}
tags: added: ussuri-backport-potential
Ceph health itself is actually fine:
[root@oc0-controller-0 ~]# podman exec ceph-mon-$HOSTNAME ceph -s
  cluster:
    id:     a7c1c1e4-5cd6-4f1c-8bc2-a37140ee09a8
    health: HEALTH_WARN
            too few PGs per OSD (8 < min 30)
  services:
    mon: 3 daemons, quorum oc0-controller-2,oc0-controller-0,oc0-controller-1 (age 22h)
    mgr: oc0-controller-2(active, since 22h), standbys: oc0-controller-0, oc0-controller-1
    osd: 12 osds: 12 up (since 22h), 12 in (since 22h)
  data:
    pools:   3 pools, 96 pgs
    objects: 0 objects, 0 B
    usage:   12 GiB used, 588 GiB / 600 GiB avail
    pgs:     96 active+clean
[root@oc0-controller-0 ~]#
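The jq error itself is easy to reproduce: jq resolves a missing key to `null`, and dividing `null` by `null` is a runtime error (exit code 5, matching the `rc` in the log). A minimal sketch below, assuming one plausible JSON shape where `ceph osd stat -f json` nests the counters under `.osdmap` instead of at the top level; the guarded variant using jq's alternative operator `//` is hypothetical, not the actual tripleo fix.

```shell
# Assumed input shape: counters nested under .osdmap, so the flat lookups
# .num_in_osds and .num_osds both resolve to null and the division fails:
echo '{"osdmap": {"num_osds": 12, "num_in_osds": 12}}' \
  | jq '( (.num_in_osds) / (.num_osds) ) * 100'
# jq: error (at <stdin>:1): null (null) and null (null) cannot be divided

# Hypothetical guard: try the nested location first, fall back to the flat one:
echo '{"osdmap": {"num_osds": 12, "num_in_osds": 12}}' \
  | jq '( (.osdmap.num_in_osds // .num_in_osds) / (.osdmap.num_osds // .num_osds) ) * 100'
# 100
```

Either way, the task should tolerate both output shapes (or check that the keys are non-null) before dividing.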