Cannot retype volume on Ceph when cinder-volume services on different nodes with different volume types

Bug #1532884 reported by Yuriy Nesenenko
Affects: Cinder
Status: Fix Released
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Let's say you have several cinder-volume services on different nodes, all using Ceph.
An extra specification then has to be created to link each volume type to a back-end name. Run this command:
$ cinder --os-username admin --os-tenant-name admin type-key ceph set volume_backend_name=ceph
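(This assumes the ceph volume type already exists; if it does not, it would first be created the same way as ceph2 below:
$ cinder --os-username admin --os-tenant-name admin type-create ceph )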

On another host, create a second volume type, ceph2:
$ cinder --os-username admin --os-tenant-name admin type-create ceph2
$ cinder --os-username admin --os-tenant-name admin type-key ceph2 set volume_backend_name=ceph2

Also, on this host, replace ceph with ceph2 in /etc/cinder/cinder.conf.
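For reference, the relevant part of cinder.conf on that host might look roughly like this (a minimal sketch; the rbd_* values are illustrative assumptions, not taken from this report):

[DEFAULT]
enabled_backends = ceph2

[ceph2]
volume_backend_name = ceph2
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes                   # assumed pool name
rbd_ceph_conf = /etc/ceph/ceph.conf  # assumed Ceph config path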
To list the extra-specifications, use this command:
$ cinder extra-specs-list
+--------------------------------------+-------+------------------------------------+
| ID                                   | Name  | extra_specs                        |
+--------------------------------------+-------+------------------------------------+
| 31f65ed1-fbfe-45ad-be67-d0aeb2372733 | ceph  | {u'volume_backend_name': u'ceph'}  |
| a00b6e98-65aa-424b-bc18-d893dda93669 | ceph2 | {u'volume_backend_name': u'ceph2'} |
+--------------------------------------+-------+------------------------------------+

Create a volume and then try to retype it:
$ cinder retype 01e4ce5d-827d-4f06-af19-dbca81599b24 ceph2
where 01e4ce5d-827d-4f06-af19-dbca81599b24 is the ID of a volume created on the host with volume_backend_name=ceph.
This fails with:

Could not find a host for volume 01e4ce5d-827d-4f06-af19-dbca81599b24 with type a00b6e98-65aa-424b-bc18-d893dda93669

summary: - Cannot retype volume when cinder-volume services on different nodes with
- different volume types
+ Cannot retype volume on Ceph when cinder-volume services on different
+ nodes with different volume types
Revision history for this message
Gorka Eguileor (gorka) wrote :

This looks like a configuration issue to me. The scheduler isn't finding the second node with the ceph2 backend.
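(One way to check is to confirm that a cinder-volume service is running for each back end, e.g.:

$ cinder service-list

and verify that both hosts appear with State "up".)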

And you'll need to use the "--migration-policy on-demand" option to make sure the volume is migrated as part of the retype.

Revision history for this message
Lisa Li (lisali) wrote :

Yes, the current retype mechanism checks whether the current volume host can serve the new volume type. If it can, the retype completes by just changing the volume type. If the volume host needs to change, "--migration-policy on-demand" must be specified.

https://github.com/openstack/cinder/blob/master/cinder/volume/manager.py#L2200

In this case, it is apparent that a data migration is needed.

Revision history for this message
Lisa Li (lisali) wrote :

And no host found in scheduler when migration-policy is not specified: https://github.com/openstack/cinder/blob/master/cinder/scheduler/filter_scheduler.py#L158

Changed in cinder:
status: New → Invalid
Revision history for this message
Lisa Li (lisali) wrote :

I set this bug as invalid; please reopen it if you have concerns about the current implementation.

Revision history for this message
Yuriy Nesenenko (ynesenenko) wrote :

If we use the "--migration-policy on-demand" option, we get:

$ cinder retype --migration-policy on-demand 5643803e-ed2e-4ac6-ba1f-c5e666d013da ceph2
where 5643803e-ed2e-4ac6-ba1f-c5e666d013da is the ID of a volume created on the host with volume_backend_name=ceph.

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/os_brick/initiator/linuxrbd.py", line 94, in __init__
    try:
  File "/usr/lib/python2.7/dist-packages/rbd.py", line 372, in __init__
    raise make_ex(ret, 'error opening image %s at snapshot %s' % (name, snapshot))
ImageNotFound: error opening image volume-f9b00cba-cbdb-4665-9a1b-2eb7616b59a3 at snapshot None

Failed to copy volume 5643803e-ed2e-4ac6-ba1f-c5e666d013da to f9b00cba-cbdb-4665-9a1b-2eb7616b59a3

And the volume ends up with migration_status=Error.
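(The resulting state can be inspected with, e.g.:

$ cinder show 5643803e-ed2e-4ac6-ba1f-c5e666d013da

whose output, for an admin, includes the migration status.)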

All of the above still holds with the proposed fix https://review.openstack.org/#/c/266180/ in place.

Changed in cinder:
status: Invalid → Confirmed
haobing1 (haobing1)
Changed in cinder:
status: Confirmed → Fix Released