Comment 8 for bug 1768231

Shivashankar C R (shivashankarcr) wrote:

Hi,

I have hit the same issue. cinder-scheduler logs:

2018-07-27 12:44:08.174 3194 INFO cinder.scheduler.base_filter [req-24837425-6f45-42bd-8f41-46ac6e2ca530 073c02c1926b4d22bc8ac73f97b26305 3d0752c33c384e34be6feb41020c9e83 - default default] Filtering removed all hosts for the request with volume ID '4ffcb859-37a9-4b5e-a302-01f3dd74b678'. Filter results: AvailabilityZoneFilter: (start: 0, end: 0), CapacityFilter: (start: 0, end: 0), CapabilitiesFilter: (start: 0, end: 0)
2018-07-27 12:44:08.175 3194 WARNING cinder.scheduler.filter_scheduler [req-24837425-6f45-42bd-8f41-46ac6e2ca530 073c02c1926b4d22bc8ac73f97b26305 3d0752c33c384e34be6feb41020c9e83 - default default] No weighed backend found for volume with properties: None
2018-07-27 12:44:08.176 3194 INFO cinder.message.api [req-24837425-6f45-42bd-8f41-46ac6e2ca530 073c02c1926b4d22bc8ac73f97b26305 3d0752c33c384e34be6feb41020c9e83 - default default] Creating message record for request_id = req-24837425-6f45-42bd-8f41-46ac6e2ca530
2018-07-27 12:44:08.185 3194 ERROR cinder.scheduler.flows.create_volume [req-24837425-6f45-42bd-8f41-46ac6e2ca530 073c02c1926b4d22bc8ac73f97b26305 3d0752c33c384e34be6feb41020c9e83 - default default] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. No weighed backends available: NoValidBackend: No valid backend was found. No weighed backends available
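
(For context on the scheduler error above: when every backend is filtered out, it is usually because cinder-volume is down or is reporting zero/unknown capacity for the RBD backend. As a diagnostic sketch only, the backend state can be checked from a host with the cinder client, for example:

  cinder service-list
  cinder get-pools --detail

Service and pool names depend on the deployment. In my case I suspect the Ceph state below is why the backend reports no usable capacity.)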

Ceph status output:
root@ubuntu-control1:~# lxc-attach -n infra1_ceph-mon_container-6d1a7907
root@infra1-ceph-mon-container-6d1a7907:/# ceph status || ceph -w
  cluster:
    id: 88990417-2535-4b03-bee6-8b398ce234d3
    health: HEALTH_WARN
            Reduced data availability: 13 pgs inactive
            Degraded data redundancy: 32 pgs undersized
            too few PGs per OSD (13 < min 30)

  services:
    mon: 1 daemons, quorum infra1-ceph-mon-container-6d1a7907
    mgr: infra1-ceph-mon-container-6d1a7907(active)
    osd: 3 osds: 3 up, 3 in; 27 remapped pgs

  data:
    pools: 5 pools, 40 pgs
    objects: 0 objects, 0 bytes
    usage: 322 MB used, 2762 GB / 2763 GB avail
    pgs: 32.500% pgs not active
             19 active+undersized+remapped
             13 undersized+peered
             8 active+clean

root@infra1-ceph-mon-container-6d1a7907:/#
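
(The 13 undersized+peered PGs mean some placement groups cannot reach their required replica count, which keeps part of the pools inactive and would explain why Cinder sees no usable backend. The following commands show whether this is a replica-count / failure-domain problem; <pool> is a placeholder for an affected pool name, e.g. the cinder volumes pool:

  ceph osd tree
  ceph osd pool ls detail
  ceph osd crush rule dump

If all three OSDs sit on a single host while the pools require size 3 with a host-level failure domain, one commonly suggested adjustment, not verified for this setup, is:

  ceph osd pool set <pool> size 2
  ceph osd pool set <pool> min_size 1
)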

Is reducing the replica count the right way to proceed here, or is there another workaround?

Thanks!