ceph backend volume migration fails when in the same pool
Bug #1871524 reported by xinliang
This bug affects 1 person
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Cinder | Fix Released | Undecided | xinliang | |
| kolla-ansible | Invalid | Undecided | Unassigned | |
Bug Description
We want to disable/delete some ceph cinder_ volume backends, but it seems that migrating a volume that is available or in-use within the same ceph pool fails. Tested on Rocky/Stein/Train; all fail.
We deploy OpenStack with kolla-ansible(
[1] "If you plan to decommission a block storage node, you must stop the cinder volume service on the node after performing the migration." at https:/
Migrating the volume with the CLI leaves migration_status in an error state:
```
linaro@j12-d05:~$ openstack volume migrate --host j12-d05@rbd-1#rbd-1 6ff6e918-dedd-4da9-8122-43d729987e61
linaro@j12-d05:~$ openstack volume show 6ff6e918-dedd-4da9-8122-43d729987e61
+--------------------------------+--------------------------------------+
| Field                          | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-04-08T02:14:49.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 6ff6e918-dedd-4da9-8122-43d729987e61 |
| migration_status               | error                                |
| multiattach                    | False                                |
| name                           | test-vol-03                          |
| os-vol-host-attr:host          | uk-dc-cavium-07@rbd-1#rbd-1          |
| os-vol-mig-status-attr:migstat | error                                |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | fe2459242c0f4fcf83e499d7ea2f5eba     |
| properties                     |                                      |
| replication_status             | None                                 |
| size                           | 20                                   |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| type                           | None                                 |
| updated_at                     | 2020-04-08T02:52:33.000000           |
| user_id                        | 1010a53f5266403e8c3a8937d0a05cf2     |
+--------------------------------+--------------------------------------+
```
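The failure shows up in two fields of the output above, `migration_status` and `os-vol-mig-status-attr:migstat`. A minimal check (illustrative only; the helper is not part of Cinder, the field names are taken from the output above):

```python
def migration_failed(volume_fields):
    """Return True when a volume's migration fields indicate failure.

    `volume_fields` is a dict of the `openstack volume show` fields; the two
    keys checked are the ones visible in the show output above.
    """
    return "error" in (
        volume_fields.get("migration_status"),
        volume_fields.get("os-vol-mig-status-attr:migstat"),
    )

print(migration_failed({"migration_status": "error",
                        "os-vol-mig-status-attr:migstat": "error"}))  # True
```

When a volume is stuck like this, `cinder reset-state --reset-migration-status <volume>` should clear the migration status so the migration can be retried after the underlying bug is addressed.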