This looks strange, as ceph-mon started reporting the active+clean state much earlier: 2015-03-08T11:42:49.560464 node-1 ./remote/node-1.test.domain.local/ceph-mon.log:2015-03-08T11:42:49.560464+00:00 emerg: 2015-03-08 11:42:49.569606 7fcf613a9700 0 log [INF] : pgmap v28: 1728 pgs: 89 inactive, 288 peering, 1351 active+clean; 0 bytes data, 12543 MB used, 283 GB / 296 GB avail
So it is unknown why cinder-scheduler was misinformed about a free/total space of 0 just a few seconds later.
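The pgmap line above can be cross-checked mechanically; a minimal illustrative Python sketch (the helper name `parse_pgmap` is mine, not part of any Ceph tooling) that extracts the PG state breakdown and confirms the counts add up to the reported total:

```python
import re

# The pgmap log line quoted above, as reported by ceph-mon.
LINE = ("pgmap v28: 1728 pgs: 89 inactive, 288 peering, 1351 active+clean; "
        "0 bytes data, 12543 MB used, 283 GB / 296 GB avail")

def parse_pgmap(line):
    # Total PG count appears as "<N> pgs:".
    total = int(re.search(r"(\d+) pgs:", line).group(1))
    # Each state entry looks like "<count> <state>," or "<count> <state>;".
    states = {state: int(count)
              for count, state in re.findall(r"(\d+) ([a-z+_]+)[,;]", line)}
    return total, states

total, states = parse_pgmap(LINE)
print(total, states)
```

Here 89 + 288 + 1351 = 1728, so the state counts are internally consistent: at that timestamp most PGs were already active+clean, which makes the zero free/total figure seen by cinder-scheduler shortly afterwards all the more puzzling.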