unable to delete migrated volume with SolidFire as the destination
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Cinder | Fix Released | Undecided | John Griffith |
Bug Description
The volume migration code does some tricky things under the covers with UUID and name swapping. The biggest challenge is that it creates a new volume (the target) with its own UUID and db entry, attaches that volume, migrates the data, and then runs a completion routine.
The completion routine has now moved into the object code, which is good for the most part, but the object code also does a number of internal things with names and IDs to try to keep track of what's what. For a volume migrated from LVM (or some other backend) to SolidFire, things look something like this:
cinder create --volume-type lvm 1
volume.id = 1ae84477-
volume.name = volume-
volume.name_id = 1ae84477-
cinder retype 1ae84477-
We get a new volume:
volume.id = 2730131b-
volume.name = volume-
Data is copied, we start the ID swapping magic, and we delete the original volume after swapping the ID. The end result is something like this:
volume.id = 1ae84477-
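The swap described above can be sketched roughly like this (illustrative Python, not Cinder's actual implementation; the dict keys are assumptions):

```python
# Illustrative sketch of the id/name_id swap at migration completion.
# After the data copy, the destination volume takes over the original,
# user-facing UUID, while its own real UUID is preserved in name_id so
# the backend-side name can still be derived from it.

def finish_migration(src, dest):
    dest["_name_id"] = dest["id"]  # remember the destination's real UUID
    dest["id"] = src["id"]         # present the original UUID to the user
    return dest

vol = finish_migration({"id": "1ae84477"}, {"id": "2730131b"})
# vol["id"] == "1ae84477", vol["_name_id"] == "2730131b"
```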
The problem is that, depending on which side of a migration you're on (destination or source), the valid UUID that was used and recorded at creation is different. In one case volume.id is valid, but a volume that was migrated off of the SF backend can't be deleted by that id because of the ID swap; the delete needs the old name_id instead.
Basically, the whole migration mechanism is a bit flawed, and we should probably revisit how it works at some point. For now, just add logic to the SolidFire driver to try both values on delete to cover all of our bases, since volume.id is no longer necessarily the UUID the backend knows the volume by.
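A minimal sketch of that delete-side workaround (function and variable names are hypothetical, not the actual SolidFire driver code):

```python
# Hypothetical sketch: on delete, look the volume up on the backend under
# both candidate identifiers, since the backend may have recorded it under
# either the old name_id (for a migrated volume) or the current id.

def find_backend_volume(backend_volumes, volume):
    """backend_volumes maps the UUID recorded at create time to the
    backend's internal record; volume is a dict with id/_name_id."""
    for candidate in (volume.get("_name_id"), volume["id"]):
        if candidate and candidate in backend_volumes:
            return backend_volumes[candidate]
    return None  # already gone on the backend; nothing left to delete


backend = {"2730131b": {"solidfire_id": 42}}
migrated = {"id": "1ae84477", "_name_id": "2730131b"}
found = find_backend_volume(backend, migrated)
# found == {"solidfire_id": 42}
```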
Fix proposed to branch: master
Review: https://review.openstack.org/522410