Race condition in attaching/detaching volumes when compute manager is unreachable
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Invalid | Medium | Nikola Đipanov | —
Bug Description
When a compute manager is offline, or if it cannot pick up messages for some reason, a race condition exists in attaching/detaching volumes.
Try to attach and then detach a volume, then bring the compute manager back online. Then the reserve_
1. The mountpoint is no longer usable.
2. os-volume_
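The race can be illustrated with a toy model (all names here are hypothetical stand-ins, not nova's actual code): the API side records a block device mapping row and casts a message that the offline compute manager never consumes, the detach only rolls back the volume state, and the stale attach message is still processed once the manager returns.

```python
import queue

# Toy model of the race; names are hypothetical, not nova's real code.
bdm_rows = []                     # stand-in for the block device mapping table
msgq = queue.Queue()              # stand-in for the RPC queue to the compute manager
volume_state = {"v1": "available"}

def api_attach(volume, device):
    # The API side records the mapping and casts to the (offline) compute manager.
    volume_state[volume] = "attaching"
    bdm_rows.append({"volume": volume, "device": device})
    msgq.put(("attach", volume, device))

def api_detach(volume):
    # With the compute manager unreachable, only the volume state is rolled
    # back; the queued attach message and the mapping row both survive.
    volume_state[volume] = "available"

def compute_drain():
    # The compute manager comes back online and processes the stale messages.
    stale = []
    while not msgq.empty():
        stale.append(msgq.get())
    return stale

api_attach("v1", "/dev/vdb")      # compute manager is down: nothing consumes this
api_detach("v1")                  # volume reads 'available' again
stale = compute_drain()           # ...but the stale attach is still processed
print(volume_state["v1"], len(stale), len(bdm_rows))  # → available 1 1
```

The point of the sketch is that the API-side rollback and the queued compute-side work are not coordinated, so the system ends up with a volume that looks available while stale attach state still exists.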
Steps to reproduce (This was recreated in Devstack with nova trunk 75af47a.)
1. Spawn an instance (mine is a multi-node devstack setup, so I spawn it on a different machine than the API, but the race condition should be reproducible in a single-node setup too)
2. Create a volume
3. Stop the compute manager (n-cpu)
4. Try to attach the volume to the instance; it should fail after a while
5. Try to detach the volume
6. List the volumes. The volume should be in the 'available' state. Optionally, you can delete it at this point
7. Check the db for block_device_
8. Start the compute manager on the node that the instance is running on
9. Check the db for block_device_
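As a rough sketch, the steps above might look like the following on a devstack node (the instance and volume IDs are placeholders, and the exact commands depend on the client versions in use):

```shell
cinder create 1 --display-name test-vol        # step 2: create a 1 GB volume
# step 3: stop the compute manager (in devstack, kill n-cpu in its screen session)
nova volume-attach <instance-id> <volume-id> /dev/vdb   # step 4: fails after a while
nova volume-detach <instance-id> <volume-id>            # step 5
cinder list                                    # step 6: volume shows 'available'
# steps 7/9: inspect nova's block device mapping rows in the database
mysql nova -e "SELECT * FROM block_device_mapping WHERE deleted = 0;"
```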
Changed in nova:
  status: Triaged → In Progress
  assignee: nobody → Jason Dillaman (jdillaman)
  tags: added: volumes
Changed in nova:
  status: In Progress → Invalid
Thanks for the detailed bug report!