volume migration requires #xxx part in destination host specification

Bug #1403902 reported by Yuriy Taraday
Affects: Cinder
Status: Fix Released
Importance: High
Assigned to: Unassigned
Milestone: 2015.1.0

Bug Description

When I run "cinder service-list", I get "Host" column with names like "host1@lvmdriver-1", but Cinder's scheduler selects host to migrate by HostState's "host" attribute which also includes "#lvmdriver-1" part.

We should either explicitly state that the "#lvmdriver-1" part is required in the host specification and add it to the "cinder service-list" output, or ignore these fragments in the scheduler.

Steps to reproduce:
$ cinder create --name vol1 1
$ cinder migrate vol1 somehost@lvmdriver-1

Expected result:
Migration gets scheduled to somehost

Actual result:
In nova-scheduler logs:
Failed to schedule_migrate_volume_to_host: No valid host was found. Cannot place volume xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx on somehost@lvmdriver-1
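
A pool-qualified destination, which is the form the scheduler actually matches against, does get scheduled; the backend and pool names here are deployment-specific examples:
$ cinder migrate vol1 somehost@lvmdriver-1#lvmdriver-1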

Tags: migration
description: updated
Revision history for this message
John Griffith (john-griffith) wrote :

I assume you meant s/nova-scheduler/cinder-scheduler/

Changed in cinder:
status: New → Confirmed
importance: Undecided → High
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/143234

Changed in cinder:
assignee: nobody → Mitsuhiro Tanino (mitsuhiro-tanino)
status: Confirmed → In Progress
Revision history for this message
Nikesh Kumar Mahalka (nikeshmahalka) wrote :

$ cinder extra-specs-list
+--------------------------------------+---------+------------------------------------------+
|                  ID                  |  Name   |               extra_specs                |
+--------------------------------------+---------+------------------------------------------+
| 4b8217ed-2320-4118-924a-472a01be9fe4 |  lvm-4  | {u'volume_backend_name': u'lvmdriver-4'} |
| ae3b8e2b-6913-4116-88ef-59519a1ba2a9 |  lvm-3  | {u'volume_backend_name': u'lvmdriver-3'} |
+--------------------------------------+---------+------------------------------------------+

$ cinder service-list
+------------------+---------------------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |              Host               | Zone | Status  | State |         Updated_at         | Disabled Reason |
+------------------+---------------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |    juno-devstack-controller     | nova | enabled |   up  | 2014-12-20T18:05:40.000000 |       None      |
|  cinder-volume   | juno-devstack-block@lvmdriver-3 | nova | enabled |   up  | 2014-12-20T18:06:04.000000 |       None      |
|  cinder-volume   | juno-devstack-block@lvmdriver-4 | nova | enabled |   up  | 2014-12-20T18:06:04.000000 |       None      |
+------------------+---------------------------------+------+---------+-------+----------------------------+-----------------+

When I create a volume without a volume type (my default volume host is juno-devstack-block@lvmdriver-3#lvmdriver-3 and my default volume type is lvm-3) and then migrate it to the volume host juno-devstack-block@lvmdriver-4#lvmdriver-4, it works for me.

But when I create a volume with volume type lvm-3 or lvm-4 and try to migrate it to the other volume host, it fails with "No valid host was found. Cannot place volume xxxxx on juno-devstack-block@lvmdriver-x#lvmdriver-x" in the cinder-scheduler log.

Revision history for this message
Mike Perez (thingee) wrote :

As mentioned in the discussion in the patch review, this is by design. Release note update to follow.

Changed in cinder:
status: In Progress → Invalid
Revision history for this message
Yuriy Taraday (yorik-sar) wrote :

I think this bug should be fixed in one of these ways:
- either the code should be changed so that 'cinder migrate' accepts what 'cinder service-list' provides;
- or 'cinder service-list' should include the #xxx pool part in its output;
- or the documentation should be updated so that a user can determine how to migrate volumes.

Otherwise, in the current state, I (with my user hat on) can't migrate volumes without digging through debug logs and stepping through the code with pdb. That makes this bug valid.

Changed in cinder:
status: Invalid → New
Revision history for this message
Yuriy Taraday (yorik-sar) wrote :

Note that for the last option to work, we need some extra API and CLI calls to determine the list of available pools.

Revision history for this message
Mitsuhiro Tanino (mitsuhiro-tanino) wrote :

>>- or 'cinder service-list' should include the #xxx pool part in its output;

An API which shows pool information was already supported by commit https://review.openstack.org/#/c/119938/.
For more detail, please see the commit. However, this API is not in the API guide, so the documentation should be updated.
(http://developer.openstack.org/api-ref-compute-v2.html)
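
For reference, a rough sketch of querying that pool API directly; the endpoint follows the scheduler-stats extension, and the cinder-api host name, the $TOKEN/$PROJECT_ID placeholders, and the trimmed response below are illustrative assumptions:
# admin token assumed; the host name and shell variables are placeholders
$ curl -s -H "X-Auth-Token: $TOKEN" \
      http://cinder-api:8776/v2/$PROJECT_ID/scheduler-stats/get_pools
{"pools": [{"name": "juno-devstack-block@lvmdriver-3#lvmdriver-3"},
           {"name": "juno-devstack-block@lvmdriver-4#lvmdriver-4"}]}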

Also, I will try to add a CLI command, "cinder get-pools", which shows pool information using the API added in commit 119938.
I think this CLI will also close the gap for end users.
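
For illustration, the intended workflow with such a command might look like the following; the table layout and pool names are assumptions based on the environment shown above, not verbatim output:
$ cinder get-pools
+----------+---------------------------------------------+
| Property | Value                                       |
+----------+---------------------------------------------+
| name     | juno-devstack-block@lvmdriver-3#lvmdriver-3 |
+----------+---------------------------------------------+
| name     | juno-devstack-block@lvmdriver-4#lvmdriver-4 |
+----------+---------------------------------------------+
$ cinder migrate vol1 juno-devstack-block@lvmdriver-4#lvmdriver-4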

Revision history for this message
Mitsuhiro Tanino (mitsuhiro-tanino) wrote :
Changed in cinder:
status: New → In Progress
Revision history for this message
Mitsuhiro Tanino (mitsuhiro-tanino) wrote :

Hi Yuriy,

These fixes are proposed to solve this bug.

   (a) Allow the destination host without the #pool part for the "migrate" and "manage" commands
       https://review.openstack.org/#/c/143234/

   (b) Modify the help docstring of "cinder migrate"; the #pool part is currently not mentioned there.
       https://review.openstack.org/#/c/145064/

   (c) Add "cinder get-pools" command(for Admin only) using API.
       https://review.openstack.org/#/c/144814/

   (d) Documentation
        https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Limitations.2FKnown_Issues

Please feel free to comment on these fixes.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on cinder (master)

Change abandoned by Mike Perez (<email address hidden>) on branch: master
Review: https://review.openstack.org/143234
Reason: inactive for a month

Mike Perez (thingee)
Changed in cinder:
status: In Progress → Confirmed
assignee: Mitsuhiro Tanino (mitsuhiro-tanino) → nobody
Jay Bryant (jsbryant)
tags: added: migration
Changed in cinder:
status: Confirmed → In Progress
status: In Progress → Confirmed
Revision history for this message
Mitsuhiro Tanino (mitsuhiro-tanino) wrote :

I believe these fixes solved this issue. If you still have a problem, please reopen this.

(a) Add "cinder get-pools" command(for Admin only) using API.
    https://review.openstack.org/#/c/144814/

(b) Expose cinder's scheduler pool API
    https://review.openstack.org/#/c/140142/

(c) Documentation
    https://wiki.openstack.org/wiki/ReleaseNotes/Juno#Limitations.2FKnown_Issues

Changed in cinder:
status: Confirmed → Fix Committed
Thierry Carrez (ttx)
Changed in cinder:
milestone: none → kilo-rc1
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in cinder:
milestone: kilo-rc1 → 2015.1.0
Changed in openstack-ansible:
milestone: none → 11.0.2
importance: Undecided → High
no longer affects: openstack-ansible