openstack volume migrate with ceph rbd backend fails and deletes the volume in ceph
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | New | Undecided | Jon Bernard |
Bug Description
Hello team,
In the OpenStack Train release, when you run "openstack volume migrate" on a volume that is not attached to any server and Ceph RBD is the backend, the action fails and also deletes the volume from Ceph.
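For reference, the reproduction boils down to the steps below. This is only a rough sketch: the pool name cinder-volumes, the placeholder UUIDs and the destination host string are illustrative, the actual values from my environment are in the output further down.

# create a small, unattached test volume
openstack volume create --size 1 --type StandardDisk test-migrate2

# confirm the backing image exists in the Ceph pool
rbd info cinder-volumes/volume-<volume-uuid>

# migrate the still-unattached volume to the second cinder-volume host
openstack volume migrate --host <dest-host>@<backend>#<pool> <volume-uuid>

# the migration ends with migration_status=error and the backing image is gone
rbd info cinder-volumes/volume-<volume-uuid>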
Example volume with id:
openstack volume show d0fbafc7-
+------
| Field | Value |
+------
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2020-02-
| description | |
| encrypted | False |
| id | d0fbafc7-
| migration_status | None |
| multiattach | False |
| name | test-migrate2 |
| os-vol-
| os-vol-
| os-vol-
| os-vol-
| properties | |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| type | StandardDisk |
| updated_at | 2020-02-
| user_id | aedbd9fe0c10463
+------
From Ceph:
rbd info cinder-
rbd image 'volume-
size 1 GiB in 128 objects
order 23 (8 MiB objects)
id: 351ce56b8b4567
block_name_prefix: rbd_data.
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Thu Feb 20 15:05:19 2020
openstack volume migrate --host xxhost2-
--
Running "openstack volume show" again, the migration has failed (migration_status is now error):
openstack volume show d0fbafc7-
+------
| Field | Value |
+------
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2020-02-
| description | |
| encrypted | False |
| id | d0fbafc7-
| migration_status | error |
| multiattach | False |
| name | test-migrate2 |
| os-vol-
| os-vol-
| os-vol-
| os-vol-
| properties | |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| type | StandardDisk |
| updated_at | 2020-02-
| user_id | aedbd9fe0c10463
+------
Checking from Ceph again:
rbd info cinder-
rbd: error opening image volume-
Error I get from the destination cinder host:
Feb 20 15:04:13 xxhost2-
Feb 20 15:04:13 xxhost2-
Has anyone seen anything similar? I know cinder migrate is not supported for the Ceph backend, but this is a serious bug.
tags: added: drivers migrate rbd
Changed in cinder:
assignee: nobody → Jon Bernard (jbernard)
Hi Anastasios,
Could you tell me how you have deployed those 2 cinder-volume services?
Are they running as Active-Active for the same backend?
Are they just 2 different volume services configured to use the same RBD pool?
I'm asking because this looks to me like you forcefully migrated the volume from one cinder-volume host to another, but in reality the RBD pool was the same. When the driver tries to create the volume on the destination, it fails because the image already exists (it is the origin volume), and as part of the failure cleanup code it then deletes what it thinks is the "destination volume" (which is also the origin).
Could you confirm, please?
Cheers,
Gorka.
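To illustrate the hypothesis above, here is a rough shell-level sketch of what that scenario would amount to. It is not Cinder's actual code path; the pool and image names are placeholders, and the grep pattern assumes a standard RBD backend section in cinder.conf.

# check on both hosts whether the two backends point at the same pool
grep -E 'rbd_pool|volume_backend_name' /etc/cinder/cinder.conf

# the "destination" create fails: an image with that name already
# exists in the pool (it is the origin volume)
rbd create --size 1024 cinder-volumes/volume-<uuid>

# the failure cleanup then removes the "destination" image,
# which is in fact the origin volume
rbd rm cinder-volumes/volume-<uuid>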