lma-toolchain 0.9 is impacted since it reports the Ceph cluster in WARNING state (MOS 8.0: deployment with 3 controllers, 3 OSDs, replica factor=3):

    ceph -s
        cluster ec475b45-5e0f-48a8-9f3a-3b3c5eda47ea
         health HEALTH_WARN
                too many PGs per OSD (480 > max 300)
         monmap e3: 3 mons at {node-4=192.168.0.7:6789/0,node-7=192.168.0.11:6789/0,node-9=192.168.0.9:6789/0}
                election epoch 8, quorum 0,1,2 node-4,node-9,node-7
         osdmap e49: 4 osds: 4 up, 4 in
          pgmap v676: 640 pgs, 10 pools, 45745 kB data, 61 objects
                6450 MB used, 1369 GB / 1375 GB avail
                     640 active+clean
      client io 9540 B/s rd, 8085 B/s wr, 9 op/s
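For reference, the 480 in the warning follows directly from the pgmap and osdmap above: the monitor counts PG replicas per OSD, so with 640 PGs at replica factor 3 spread over the 4 "in" OSDs (per the osdmap, even though the description says 3 OSDs):

    640 PGs * 3 replicas / 4 OSDs = 480 PGs per OSD  (> default mon_pg_warn_max_per_osd of 300)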
A workaround would be to suppress this warning by disabling the check in ceph.conf (or by raising the threshold):

    mon_pg_warn_max_per_osd = 0
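A minimal sketch of how that would be applied; the file location and the restart-free injection follow standard Ceph practice and are not taken from this report:

    # /etc/ceph/ceph.conf on each monitor node
    [global]
    mon_pg_warn_max_per_osd = 0    # 0 disables the per-OSD PG count warning

    # or inject it at runtime (not persistent across monitor restarts):
    ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 0'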
But apparently, it is up to Ceph admins to (re)organize the cluster after deployment for better performance on MOS 8.0.
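As a sketch of what such a reorganization aims at, the usual rule of thumb from the Ceph documentation is on the order of 100 PG replicas per OSD, with the total rounded to a power of two; the numbers below are my own arithmetic for this deployment, not from the report:

    (4 OSDs * 100) / 3 replicas ≈ 133  ->  ~128 PGs total across all pools (vs. 640 deployed)

Note that, as far as I know, pg_num on the Ceph release shipped with MOS 8.0 can only be increased, never decreased, so shrinking oversized pools effectively means recreating them with a smaller pg_num.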