BDM is not deleted if an instance booted from volume fails at the scheduling stage
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Expired | Undecided | Unassigned |
Bug Description
Description
============
I did some testing on boot-from-volume instances and found that such an instance sometimes fails during the evacuate operation. After some digging, I found the evacuate operation failed because the conductor service returned the wrong block device mapping, one with no connection info. Digging further, I found BDMs that should NOT exist because they belong to a deleted instance. After some more testing, I found a way to reproduce this problem.
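The failure mode can be illustrated with a toy model (plain Python, not nova code; all names and structures here are simplified assumptions): if deleting an instance does not also delete its BDM rows, a later instance booted from the same volume leaves the conductor's lookup with a stale, connection-less BDM to pick up.

```python
# Toy model of the reported failure. BDM rows are keyed by
# (instance_uuid, volume_id); names are illustrative, not nova's schema.

class BDM:
    def __init__(self, instance_uuid, volume_id, connection_info=None):
        self.instance_uuid = instance_uuid
        self.volume_id = volume_id
        # Set only once the volume is actually attached on a compute host.
        self.connection_info = connection_info

bdm_table = []

def boot_from_volume(instance_uuid, volume_id, attach_succeeds=True):
    # A BDM row is created before scheduling; if scheduling fails
    # (e.g. all nova-compute services are down), connection_info stays empty.
    bdm = BDM(instance_uuid, volume_id)
    if attach_succeeds:
        bdm.connection_info = {"driver_volume_type": "iscsi"}
    bdm_table.append(bdm)

def delete_instance_buggy(instance_uuid):
    # The bug: the instance is deleted but its BDM rows are left behind.
    pass

def bdms_for_volume(volume_id):
    # Simplified stand-in for the lookup that, per the report,
    # returned the wrong BDM during evacuate.
    return [b for b in bdm_table if b.volume_id == volume_id]

# Replay the scenario from the report:
boot_from_volume("bfv1", "image-volume1", attach_succeeds=False)  # ends in ERROR
delete_instance_buggy("bfv1")                                     # orphan BDM remains
boot_from_volume("bfv2", "image-volume1")                         # healthy instance

stale = [b for b in bdms_for_volume("image-volume1")
         if b.connection_info is None]
print(len(stale))  # → 1, the orphaned BDM that breaks evacuate
```

If deletion also purged the BDM rows, the lookup for `image-volume1` would return only bfv2's mapping with its connection info intact.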
Steps to reproduce
==================
1, create a volume from an image (image-volume1)
2, stop or disable all nova-compute services
3, boot an instance (bfv1) from the volume (image-volume1)
4, wait for the instance to go into ERROR state
5, delete the instance we just created
6, look at the block_device_ table
7, boot another instance (bfv2) from the same volume (image-volume1)
8, execute the evacuate operation on bfv2
9, the evacuate operation fails and bfv2 goes into ERROR state.
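Assuming a standard OpenStack CLI setup (the image name, flavor, and service unit name below are illustrative assumptions, not taken from the report), the steps above roughly correspond to:

```shell
# 1. Create a bootable volume from an image (image name is an assumption)
openstack volume create --image cirros --size 1 image-volume1

# 2. Stop every nova-compute service (unit name varies by distro)
sudo systemctl stop openstack-nova-compute

# 3-4. Boot bfv1 from the volume; with no compute up it goes to ERROR
openstack server create --volume image-volume1 --flavor m1.tiny bfv1

# 5. Delete the failed instance (per the report, its BDM row survives)
openstack server delete bfv1

# 6. Inspect nova's BDM table in the database for the orphaned row
#    (table name is truncated in the report)

# 7. Bring compute back and boot bfv2 from the same volume
sudo systemctl start openstack-nova-compute
openstack server create --volume image-volume1 --flavor m1.tiny bfv2

# 8-9. Evacuate bfv2; per the report this fails and bfv2 goes to ERROR
nova evacuate bfv2
```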
Environment
============
* CentOS 7
* OpenStack Liberty
I looked at the master branch code. This bug still exists.
@Jiajun Liu, I couldn't reproduce this bug.
I followed the above steps in devstack multi-node environment:
* Ubuntu
* master

1. Created a bootable volume (v1) from an image.
2. Stopped all compute services.
3. Booted an instance (test1) with the created volume (v1); the instance changed to ERROR state.
4. Deleted the instance.
5. Restarted the compute services and booted another instance (test2) with v1.
6. Executed evacuate on test2 and everything worked as expected. I didn't get the error.