Comment 4 for bug 1851287

Kevin Smith (kevin.smith.wrs) wrote:

Issue reproduced with a host-swact. No host-lock was performed after the swact, but pods would surely get stuck terminating if one were issued. Logs attached. Ceph remains continuously in the state shown below:

controller-1:~$ ceph -s
  cluster:
    id:     427bf4e1-20f5-4c9a-a1f9-337796696e3a
    health: HEALTH_WARN
            Reduced data availability: 64 pgs inactive, 64 pgs peering

  services:
    mon: 3 daemons, quorum controller-0,controller-1,compute-0
    mgr: controller-1(active), standbys: controller-0
    osd: 2 osds: 2 up, 2 in

  data:
    pools:   1 pools, 64 pgs
    objects: 73.06 k objects, 283 GiB
    usage:   568 GiB used, 324 GiB / 892 GiB avail
    pgs:     100.000% pgs not active
             64 peering
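
For anyone triaging this, a few standard Ceph commands can help narrow down why the PGs stay stuck in peering. These are not from the original report, just a suggested diagnostic sketch; the PG id used below is a placeholder:

controller-1:~$ ceph health detail            # lists the PGs behind the HEALTH_WARN
controller-1:~$ ceph pg dump_stuck inactive   # dumps the inactive/peering PGs and their acting OSDs
controller-1:~$ ceph pg 1.0 query             # per-PG peering state (1.0 is a placeholder PG id)
controller-1:~$ ceph osd tree                 # confirms both OSDs are reported up/in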