Manage volume returns with "No valid backend was found"

Bug #1652811 reported by Erlon R. Cruz
This bug affects 5 people
Affects: Cinder
Status: Fix Released
Importance: High
Assigned to: Gorka Eguileor
Milestone: ocata-3

Bug Description

After 'Cosmetic changes to scheduler' [1], if we try to manage a volume using the following procedure, we end up with a "No valid backend was found" error even though the backend is present:

$ cinder-manage host list
105-cinder-plugins nova
105-cinder-plugins@nfs nova
105-cinder-plugins@ceph nova
105-cinder-plugins@lvm nova
105-cinder-plugins@nfs1 nova
105-cinder-plugins@nfs2 nova
105-cinder-plugins@hnas-nfs nova

$ cinder manageable-list 105-cinder-plugins@lvm
+------------------------------------------------------------------+------+----------------+-----------------+-----------+------------+
| reference | size | safe_to_manage | reason_not_safe | cinder_id | extra_info |
+------------------------------------------------------------------+------+----------------+-----------------+-----------+------------+
| {u'source-name': u'volume-a8002db0-bc11-41e1-903c-29aae5e6257d'} | 3 | True | - | - | - |
+------------------------------------------------------------------+------+----------------+-----------------+-----------+------------+

$ cinder manage --volume-type lvm 105-cinder-plugins@lvm volume-a8002db0-bc11-41e1-903c-29aae5e6257d
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-12-27T17:04:26.000000 |
| description | None |
| encrypted | False |
| id | dd37d2d6-5558-44e0-9ff9-7a86acfdb87f |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | None |
| os-vol-host-attr:host | 105-cinder-plugins@lvm |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | bc752faf23834ca98641fe183eaa581c |
| replication_status | None |
| size | 0 |
| snapshot_id | None |
| source_volid | None |
| status | error |
| updated_at | 2016-12-27T17:04:26.000000 |
| user_id | 731db462453e4a42999f4b0cf6d73a35 |
| volume_type | lvm |
+--------------------------------+--------------------------------------+

In the scheduler logs (with some extra prints added):

Filter CapabilitiesFilter returned 1 host(s) from (pid=16623) get_filtered_objects /opt/stack/cinder/cinder/scheduler/base_filter.py:132
2016-12-27 15:17:45.846 DEBUG cinder.scheduler.filter_scheduler [req-6be5224f-d78f-4eef-9fa1-9a6fef124923 731db462453e4a42999f4b0cf6d73a35 bc752faf23834ca98641fe183eaa581c] Filtered [host '105-cinder-plugins@lvm#lvm': free_capacity_gb: 7.01, pools: None] from (pid=16623) _get_weighted_candidates /opt/stack/cinder/cinder/scheduler/filter_scheduler.py:345
2016-12-27 15:17:45.848 DEBUG cinder.scheduler.filter_scheduler [req-6be5224f-d78f-4eef-9fa1-9a6fef124923 731db462453e4a42999f4b0cf6d73a35 bc752faf23834ca98641fe183eaa581c] backend_passes_filters: backend_state.backend_id = 105-cinder-plugins@lvm#lvm,backend = 105-cinder-plugins@lvm from (pid=16623) backend_passes_filters /opt/stack/cinder/cinder/scheduler/filter_scheduler.py:139
2016-12-27 15:17:45.850 ERROR cinder.scheduler.manager [req-6be5224f-d78f-4eef-9fa1-9a6fef124923 731db462453e4a42999f4b0cf6d73a35 bc752faf23834ca98641fe183eaa581c] Failed to schedule_manage_existing: No valid backend was found. Cannot place volume d853ac6c-a806-4480-bdac-151f37f2a15a on 105-cinder-plugins@lvm

[1] https://review.openstack.org/#/c/346041/
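The log lines above show the filter scheduler returning the backend with its pool suffix (105-cinder-plugins@lvm#lvm) while the manage request carries only the bare backend (105-cinder-plugins@lvm). A minimal standalone sketch (not Cinder code; the helper name is illustrative) of why a strict comparison between the two fails:

```python
# Standalone sketch: Cinder backend identifiers follow the form
# host@backend#pool. The manage request omits the pool, so an exact
# string comparison against the filtered backend can never succeed.

def backend_matches_strict(requested: str, candidate: str) -> bool:
    """Exact comparison, roughly what the scheduler effectively did."""
    return requested == candidate

requested = "105-cinder-plugins@lvm"      # what `cinder manage` passes
candidate = "105-cinder-plugins@lvm#lvm"  # what the filter scheduler returns

print(backend_matches_strict(requested, candidate))  # False -> "No valid backend was found"
```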

Gorka Eguileor (gorka)
Changed in cinder:
assignee: nobody → Gorka Eguileor (gorka)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote: Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/421348

Changed in cinder:
assignee: Gorka Eguileor (gorka) → Erlon R. Cruz (sombrafam)
status: New → In Progress
OpenStack Infra (hudson-openstack) wrote: Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/421438

Changed in cinder:
assignee: Erlon R. Cruz (sombrafam) → Gorka Eguileor (gorka)
Changed in cinder:
importance: Undecided → High
milestone: none → ocata-3
OpenStack Infra (hudson-openstack) wrote: Change abandoned on cinder (master)

Change abandoned by Erlon R. Cruz (<email address hidden>) on branch: master
Review: https://review.openstack.org/421348
Reason: Abandoning in favor of: https://review.openstack.org/#/c/421438/

OpenStack Infra (hudson-openstack) wrote: Fix merged to cinder (master)

Reviewed: https://review.openstack.org/421438
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=03ad26cdaf0a2e2fb0385ebadf85a4d42a785e8f
Submitter: Jenkins
Branch: master

commit 03ad26cdaf0a2e2fb0385ebadf85a4d42a785e8f
Author: Gorka Eguileor <email address hidden>
Date: Tue Jan 17 16:56:42 2017 +0100

    Fix volume manage

    After moving the manage operation to support Active/Active, the
    scheduler could no longer find the backend on which to place managed
    volumes.

    The issue comes from the fact that the user can request managing a
    volume with just the host, but if that backend is in a cluster the
    volume must be created in the DB with the cluster as well. So in the
    API we create the volume using the service's host and cluster_name,
    but those are missing the pool information.

    The solution requires that the scheduler be able to match backends
    even when the search omits the pool, and that the scheduler update
    the volume DB entry to include the pool.

    Co-Authored-By: Erlon R. Cruz <email address hidden>
    Closes-bug: #1652811
    Change-Id: I6110c0f93097f4e8c2b5fbc934924e3af1ec431b
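The two-part fix described in the commit message can be sketched as follows. This is a simplified illustration, not Cinder's actual code; the function names are hypothetical:

```python
# Hypothetical sketch of the fix: (1) match backends even when the
# request omits the pool, and (2) write the fully qualified backend
# (including the pool) back into the volume's host field.

def split_backend_id(backend_id: str):
    """Split 'host@backend#pool' into ('host@backend', pool-or-None)."""
    if "#" in backend_id:
        base, pool = backend_id.split("#", 1)
        return base, pool
    return backend_id, None

def backend_matches(requested: str, candidate: str) -> bool:
    req_base, req_pool = split_backend_id(requested)
    cand_base, cand_pool = split_backend_id(candidate)
    if req_base != cand_base:
        return False
    # A request without a pool matches any pool on that backend.
    return req_pool is None or req_pool == cand_pool

def resolve_volume_host(requested: str, candidate: str) -> str:
    """Return the full backend id (with pool) to store in the volume DB entry."""
    if backend_matches(requested, candidate):
        return candidate
    raise ValueError("No valid backend was found")

print(resolve_volume_host("105-cinder-plugins@lvm",
                          "105-cinder-plugins@lvm#lvm"))
# -> 105-cinder-plugins@lvm#lvm
```

With pool-agnostic matching, the request for 105-cinder-plugins@lvm accepts the filtered candidate 105-cinder-plugins@lvm#lvm, and the volume record then stores the pool-qualified host.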

Changed in cinder:
status: In Progress → Fix Released
OpenStack Infra (hudson-openstack) wrote: Fix included in openstack/cinder 10.0.0.0b3

This issue was fixed in the openstack/cinder 10.0.0.0b3 development milestone.
