Here's the output of the above commands:
sudo ceph mgr module ls:
MODULE
balancer              on (always on)
crash                 on (always on)
devicehealth          on (always on)
orchestrator          on (always on)
pg_autoscaler         on (always on)
progress              on (always on)
rbd_support           on (always on)
status                on (always on)
telemetry             on (always on)
volumes               on (always on)
iostat                on
nfs                   on
restful               on
rook                  on
alerts                -
influx                -
insights              -
localpool             -
mirroring             -
osd_perf_query        -
osd_support           -
prometheus            -
selftest              -
snap_schedule         -
stats                 -
telegraf              -
test_orchestrator     -
zabbix                -
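For reading the list above: modules marked "on (always on)" cannot be disabled, plain "on" means enabled, and "-" means the module is available but not enabled. If one of the disabled modules were needed, it could be enabled with the standard mgr command, for example (prometheus here is just an arbitrary pick from the "-" entries above):

sudo ceph mgr module enable prometheus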
sudo ceph status:
  cluster:
    id:     b37a0c92-19c5-11ee-b1f5-8b04eccdb717
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum juju-01d100-1,juju-01d100-0,juju-01d100-2 (age 23m)
    mgr: juju-01d100-1(active, since 4m), standbys: juju-01d100-0, juju-01d100-2
    osd: 3 osds: 3 up (since 15m), 3 in (since 15m)

  data:
    pools:   1 pools, 1 pgs
    objects: 2 objects, 449 KiB
    usage:   62 MiB used, 5.9 GiB / 6.0 GiB avail
    pgs:     1 active+clean
The output is identical on Kinetic.