volume migration command doesn't function when sent from a different machine

Bug #1255957 reported by Yogev Rabl
This bug affects 1 person
Affects: Cinder
Status: Opinion
Importance: Undecided
Assigned to: Unassigned

Bug Description

Description of problem:
While trying to validate an RFE about volume migration, https://blueprints.launchpad.net/cinder/+spec/volume-migration , the feature did not function.
It failed to move volumes from one backend to another in all of the following cases:
1. lvm to nfs
2. nfs to lvm
3. lvm to lvm

Version-Release number of selected component (if applicable):
openstack-cinder-2013.2-2.el6ost.noarch
python-cinderclient-1.0.6-2.el6ost.noarch
python-cinder-2013.2-2.el6ost.noarch

Red Hat Enterprise Linux Server release 6.5 (Santiago)

How reproducible:
Every time

Steps to Reproduce:
1. Configure Cinder to run with multiple backends (of different types), following the documentation: http://docs.openstack.org/admin-guide-cloud/content//managing-volumes.html (a minimal configuration sketch follows below).
2. Migrate one of the volumes to the other backend.
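
For reference, a minimal multi-backend cinder.conf along the lines of that documentation might look like the sketch below; the section names (lvm1, nfs1), the backend names, and the NFS shares file path are illustrative assumptions rather than the reporter's actual configuration:

[DEFAULT]
# comma-separated list of the backend sections to enable
enabled_backends = lvm1,nfs1

[lvm1]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes
volume_backend_name = LVM_iSCSI

[nfs1]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
volume_backend_name = NFS

Each volume_backend_name can then be mapped to a volume type with cinder type-key <type> set volume_backend_name=<name>, which is how a type such as the nfs type used later in this report is normally wired up.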

Actual results:
the volume failed to migrate

Expected results:
the volume should have migrated

Additional info:

The logs are attached.

Revision history for this message
Yogev Rabl (yrabl) wrote :
Revision history for this message
Avishay Traeger (avishay-il) wrote :

Can you give some indication of what commands you ran, what responses you got, config, ...?

Changed in cinder:
status: New → Incomplete
Revision history for this message
Yogev Rabl (yrabl) wrote :

NP:

First, I have one NFS backend and two LVM backends.
Second, I created a volume in the NFS backend (using the nfs volume type): cinder create --volume-type nfs 1. Then I checked its host with cinder show:

+--------------------------------+-------------------------------------------+
| Property | Value |
+--------------------------------+-------------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2013-11-28T09:15:18.000000 |
| display_description | None |
| display_name | None |
| id | 3c0dd292-3266-444e-bb48-3b8b367b3757 |
| metadata | {} |
| os-vol-host-attr:host | yrabl-glance01.qe.lab.tlv.redhat.com@nfs1 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 1d55b21cb22f46f7b4fff6cf4604205b |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| volume_type | nfs |
+--------------------------------+-------------------------------------------+

Then I ran the migration:
#cinder migrate 3c0dd292-3266-444e-bb48-3b8b367b3757 <hostname>@lvm1

There was no response in the CLI indicating whether the action succeeded or failed, and there was an error in the logs.
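
For what it's worth, cinder migrate prints nothing on success; one generic way to follow the migration (a sketch, using the volume ID from this report) is to poll the volume and watch the os-vol-mig-status-attr:migstat and os-vol-host-attr:host fields shown in the table above:

# poll the volume and watch the migration-related fields
while true; do
    cinder show 3c0dd292-3266-444e-bb48-3b8b367b3757 | grep -E 'migstat|host-attr|status'
    sleep 5
done

When the migration completes, os-vol-host-attr:host should point at the target backend.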

Revision history for this message
Yogev Rabl (yrabl) wrote :

In addition, I ran all the commands from the Nova machine and not locally from the Cinder machine.

The entire topology is:
Machine 1: Nova, Neutron, Keystone, Swift.
Machine 2: Cinder.
Machine 3: Glance.

When I run the migrate command from the Cinder machine it works.

summary: - volume migration doesn't function
+ volume migration command doesn't function when sent from a different
+ machine
Yogev Rabl (yrabl)
Changed in cinder:
status: Incomplete → New
Revision history for this message
Avishay Traeger (avishay-il) wrote :

>> There was no response in the CLI indicating whether the action succeeded or failed, and there was an error in the logs.

What error? I didn't see any in the Cinder logs that you attached. It doesn't really make sense that where you ran the command from influenced the success of the command.
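
One generic way to rule out an environment difference between the two machines (this is a suggested sketch, not something taken from the attached logs) is to compare what the client on each machine is actually talking to:

# compare the client environment on the Nova machine and on the Cinder machine
env | grep -E 'OS_(AUTH_URL|TENANT_NAME|USERNAME)'

# confirm both machines resolve the same Cinder API endpoint (if your client supports this subcommand)
cinder endpoints

If both machines use the same credentials and resolve the same endpoints, where the command is run from should indeed not matter.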

Changed in cinder:
status: New → Incomplete
Revision history for this message
Yogev Rabl (yrabl) wrote :

Sorry, I wrote it too quickly. There was no error in the log; that is one of the problems with this issue: I didn't have anything to say about it from the log perspective.

sorry, again, for the mistake.

Revision history for this message
Avishay Traeger (avishay-il) wrote :

Can we start from the beginning please, with this patch too?
https://review.openstack.org/#/c/59650/

Maybe for some reason Nova failed, and you didn't see an error, and it had nothing to do with where you ran from?

Please submit all of the commands that you ran with their output, and logs for Cinder and Nova. Thanks!

Changed in cinder:
assignee: nobody → Flavio Percoco (flaper87)
assignee: Flavio Percoco (flaper87) → nobody
Revision history for this message
Ben Swartzlander (bswartz) wrote :

I've seen this exact same issue. When you use a volume type to create your volume, and you later try to migrate it, the migration will silently do nothing if the scheduler decides that the volume type doesn't match the target backend. Please try it again without specifying a volume type.
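
A concrete way to test this (a generic sketch, not taken from the reporter's environment) is to check whether the volume type pins volumes to one backend, and to retry the migration with an untyped volume:

# show the extra specs of the defined volume types; a volume_backend_name entry pins a type to one backend
cinder extra-specs-list

# create an untyped volume and try migrating that one instead
cinder create 1
cinder migrate <new-volume-id> <hostname>@lvm1

If the untyped volume migrates but the typed one silently does not, the scheduler is filtering the target out because of the type, as described above.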

tags: added: migration nfs
Revision history for this message
Jay Bryant (jsbryant) wrote :

This sounds similar to issues others have reported to me where they are using migration but should be using retype, because the volume type they are using requires a specific backend.

Yogev, have you tried recreating this again without using volume types as requested above?

Revision history for this message
Vincent Hou (houshengbo) wrote :

I am simply putting the new documentation for how volume migration works here; it should absolutely be added to the migration documentation to avoid user confusion when they try to use volume migration.
Volume migration shares a common code path, and it is used by both the "cinder migrate" and "cinder retype" commands.

When to use cinder migrate?
Use it if you would like to migrate the volume within the same volume type and the same backend name, but located on a different machine.
For example, the volume backend name on machine 1 (m-1) is lvmdriver-1 and it is lvmdriver-1 on the second machine (m-2) too. Both backends must belong to the same volume type.
If <volume-id> is on m-1, then use the command cinder migrate <volume-id> m-2#lvmdriver-1#lvmdriver-1 to migrate it from m-1 to m-2. This way works. DON'T use this command to move a volume across different types or different backends.
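
A hedged sketch of that flow (m-1, m-2 and lvmdriver-1 are the illustrative names used above): first find the exact target host string, then pass it to cinder migrate unchanged:

# list the cinder-volume services; the Host column shows the target strings (if your client supports service-list)
cinder service-list

# check which host/backend the volume currently lives on
cinder show <volume-id> | grep os-vol-host-attr:host

# migrate it, passing the target host string exactly as reported above
cinder migrate <volume-id> <target-host-string>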

When to use retype?
If you would like to migrate the volume across types or backends, this command is what you need to use.
For example, the volume backend name on machine 1 (m-1) is lvmdriver-1 and it is storwize on the second machine (m-2). The first belongs to type1 and the second belongs to type2.
If you would like to move a volume on m-1 to m-2, you need to use cinder retype <volume-id> type2 --migration-policy on-demand.
This way works as well.
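
An end-to-end sketch of that path (the type and backend names are the illustrative ones from this comment, and the commands assume the two backends are already configured):

# tie each volume type to its backend through the volume_backend_name extra spec
cinder type-create type1
cinder type-key type1 set volume_backend_name=lvmdriver-1
cinder type-create type2
cinder type-key type2 set volume_backend_name=storwize

# retype the volume to type2, allowing a migration to the backend that serves type2
cinder retype <volume-id> type2 --migration-policy on-demand

The extra spec values must match the volume_backend_name settings in cinder.conf on each machine, otherwise the scheduler will not find a valid host.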

Folks, if you need to do a migration, find out which of the kinds of migration above you need, then do it the right way. I admit that volume migration so far is REALLY confusing. Wish you luck.

Vincent Hou (houshengbo)
Changed in cinder:
assignee: nobody → Vincent Hou (houshengbo)
Revision history for this message
Jay Bryant (jsbryant) wrote :

We should get the Cinder documentation updated with information like this. Where would be the right place to make that happen?

Revision history for this message
Sean McGinnis (sean-mcginnis) wrote :

Automatically unassigning due to inactivity.

Changed in cinder:
assignee: Vincent Hou (houshengbo) → nobody
Revision history for this message
Vincent Hou (houshengbo) wrote :

I believe that this bug does not exist any longer.
Please check my comments above for the detailed information.
If you still feel this is a valid issue, feel free to reopen it.

Changed in cinder:
status: Incomplete → Invalid
Revision history for this message
Tyler North (ty-north) wrote :

I just hit a similar issue when trying to migrate between two separate storage backends.

Is it possible for either:

A) the documentation be updated to reflect that migration should not be used between two different storage backends,

or

B) the Cinder API be modified so that migration automatically fails when trying to migrate to a new host that does not match the existing volume's backend?

Changed in cinder:
status: Invalid → Opinion