Volume is not detached when deleted VM was in error state
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | In Progress | Low | Ameed Ashour |
Bug Description
When creating a VM with multiple volumes from a snapshot, if the volume quota is exceeded Nova fails to launch the VM. When the VM is then deleted, the volumes that were created are not detached; they remain attached to 'None'.
Steps to reproduce:
1. Create a VM (booting from a new volume) and attach 2 additional volumes to it.
2. Create a snapshot of the VM.
3. Set the volume quota to 5.
4. Create a new VM based on the snapshot from step 2.
5. The VM goes into the error state; when it is deleted, the volumes remain attached to 'None'.
Expected:
Instance deleted and volume detached.
Actual:
Instance deleted and volume attached to 'None'.
In our deployment, which runs stable/ocata, we also hit bug #1668267; this may be related.
For the sake of completeness: we are running OpenStack with Ceph.
tags: added: openstack-version.ocata volumes
Changed in nova:
status: New → Confirmed
importance: Undecided → Low
Changed in nova:
assignee: nobody → Ameed Ashour (ameeda)
Changed in nova:
status: Confirmed → In Progress
So if I'm following correctly: you have 3 volumes attached when you snapshot the instance, then set the volume quota to 5 and create another instance from the snapshot image, which has 3 image-defined BDMs in it.
Since it's boot from volume, nova-compute is what creates the 3 volumes from the volume snapshots. Two of those volumes would be created and the third would be rejected by Cinder for exceeding the quota. That makes the _prep_block_devices method in the compute service fail and set the instance to ERROR state. I'm not sure whether the 2 volumes that were created would be attached or not, and I guess that's the bug: maybe they aren't in a state that the compute service knows how to clean up properly. It probably also matters what the delete_on_termination value is in those BDMs, since that determines whether compute would remove them.
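The suspected partial-attach path can be sketched as a toy simulation. Everything here (FakeCinder, boot_from_snapshot, delete_instance, the volume names) is a hypothetical stand-in for illustration, not actual nova or cinder code: it only shows how volumes attached during a failed _prep_block_devices run could end up with stale attachment records if the delete path only detaches BDMs that were recorded for the instance.

```python
class OverQuota(Exception):
    """Raised when creating a volume would exceed the project quota."""


class FakeCinder:
    """Minimal stand-in for the Cinder API with a per-project volume quota."""

    def __init__(self, quota, existing):
        self.quota = quota
        # volume_id -> instance_id the volume is attached to (or None)
        self.volumes = dict(existing)

    def create_volume(self, volume_id):
        if len(self.volumes) >= self.quota:
            raise OverQuota(volume_id)
        self.volumes[volume_id] = None

    def attach(self, volume_id, instance_id):
        self.volumes[volume_id] = instance_id

    def detach(self, volume_id):
        self.volumes[volume_id] = None


def boot_from_snapshot(cinder, instance_id, image_bdms):
    """Create and attach each image-defined BDM; go to ERROR on over-quota."""
    try:
        for vol_id in image_bdms:
            cinder.create_volume(vol_id)
            cinder.attach(vol_id, instance_id)
        return "ACTIVE"
    except OverQuota:
        # The suspected buggy path: volumes already attached are not rolled
        # back, so their attachment records outlive the failed instance.
        return "ERROR"


def delete_instance(cinder, recorded_bdms):
    """Delete only detaches volumes recorded in the instance's BDM table.

    Volumes attached during the failed boot were never recorded, so they
    are skipped and keep a stale attachment.
    """
    for vol_id in recorded_bdms:
        cinder.detach(vol_id)


# 3 volumes exist from the original VM; quota is 5; the new VM needs 3 more.
cinder = FakeCinder(quota=5, existing={"v1": "vm1", "v2": "vm1", "v3": "vm1"})
state = boot_from_snapshot(cinder, "vm2", ["new-1", "new-2", "new-3"])
delete_instance(cinder, recorded_bdms=[])  # nothing was recorded for vm2

print(state)                    # ERROR
print(cinder.volumes["new-1"])  # vm2 -- stale attachment to a deleted VM
```

In this sketch the third create fails, the first two volumes keep attachment records pointing at the now-deleted instance, and the tracker then displays the attachment as 'None', matching the reported symptom.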