[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | compute-0    | worker      | unlocked       | enabled     | available    |
| 3  | compute-1    | worker      | unlocked       | enabled     | available    |
| 6  | storage-1    | storage     | unlocked       | enabled     | available    |
| 7  | storage-0    | storage     | unlocked       | enabled     | available    |
| 12 | controller-1 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
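Before continuing, every host should report unlocked/enabled/available. As a minimal sketch (not part of the original procedure), the table text above could be checked programmatically; the sample rows and the helper name `all_hosts_ready` are illustrative assumptions:

```python
# Hypothetical helper: scan `system host-list` table output and confirm
# every host row reports unlocked / enabled / available.
SAMPLE = """\
| 1  | controller-0 | controller  | unlocked | enabled | available |
| 2  | compute-0    | worker      | unlocked | enabled | available |
"""

def all_hosts_ready(table_text: str) -> bool:
    """Return True if every data row is unlocked/enabled/available."""
    for line in table_text.splitlines():
        if not line.startswith("|") or "hostname" in line:
            continue  # skip border and header lines
        # Columns: id, hostname, personality, administrative,
        #          operational, availability
        cells = [c.strip() for c in line.strip("|").split("|")]
        if cells[3:6] != ["unlocked", "enabled", "available"]:
            return False
    return True

print(all_hosts_ready(SAMPLE))  # True for the sample rows above
```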
[wrsroot@controller-0 ~(keystone_admin)]$ ceph -s
  cluster:
    id:     85261611-1245-4b21-bd0b-cc9afdc26cff
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum controller-0,controller-1,storage-0
    mgr: controller-0(active), standbys: controller-1
    osd: 2 osds: 2 up, 2 in
    rgw: 1 daemon active

  data:
    pools:   9 pools, 856 pgs
    objects: 1.59 k objects, 825 MiB
    usage:   1.8 GiB used, 890 GiB / 892 GiB avail
    pgs:     856 active+clean

  io:
    client: 360 KiB/s wr, 0 op/s rd, 68 op/s wr
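The same health check can be scripted: `ceph -s` also accepts `--format json` for machine-readable output. The sketch below is an assumption about the relevant JSON fields (`health.status`, `osdmap` counters), shown against a trimmed sample rather than a live cluster:

```python
import json

# Trimmed sample mirroring the `ceph -s` output above; field layout is an
# assumption about the JSON form, not taken from the original document.
SAMPLE = json.dumps({
    "health": {"status": "HEALTH_OK"},
    "osdmap": {"num_osds": 2, "num_up_osds": 2, "num_in_osds": 2},
    "pgmap": {"num_pgs": 856},
})

def cluster_healthy(status_json: str) -> bool:
    """True when health is HEALTH_OK and all OSDs are up and in."""
    s = json.loads(status_json)
    osd = s["osdmap"]
    return (s["health"]["status"] == "HEALTH_OK"
            and osd["num_up_osds"] == osd["num_osds"] == osd["num_in_osds"])

print(cluster_healthy(SAMPLE))  # True
```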