Detaching a volume from an instance sometimes does not work
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | New | Undecided | Unassigned |
Bug Description
Description
===========
From the Horizon UI perspective:
Detaching a volume from a server shows a success popup message and the status switches to "Detaching" (with a detaching loading bar); after a few minutes the detaching finishes, but the volume is still attached. (This happens only sometimes, roughly once in 5 runs.)
The volume status was also checked with the OpenStack SDK, and the volume is still attached, so it is not only a Horizon issue.
See the video in the Logs & Configs part.
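For reference, a minimal sketch of this kind of SDK status check; the cloud name "devstack" is an assumption, and the volume ID is the one identified in the comments below:

    import openstack

    # Connect using a clouds.yaml entry; "devstack" is an assumed name.
    conn = openstack.connect(cloud="devstack")

    # Volume ID taken from the logs quoted below; substitute your own.
    vol = conn.block_storage.get_volume("28772a87-dbf2-484a-8ce7-8a19e8a3115d")

    # After a successful detach the status should be "available" with no
    # attachments; when the bug hits, it stays "in-use".
    print(vol.status)        # "in-use" instead of "available"
    print(vol.attachments)   # still lists the instance attachment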
Steps to reproduce
==================
1) Create an instance.
2) Create a volume.
3) Created volume -> Manage Attachments -> attach the volume to the created instance.
4) The volume is attached to the instance.
5) Created volume -> Manage Attachments -> detach the volume.
(A scripted equivalent of these steps is sketched below, after the Actual result section.)
Expected result
===============
6) The volume should be Available again and not attached.
Actual result
=============
6) The volume is still in-use and attached to the instance.
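The same sequence can be scripted without Horizon. A minimal sketch with the OpenStack SDK; the cloud, flavor, image, and network names are assumptions:

    import openstack

    conn = openstack.connect(cloud="devstack")  # assumed clouds.yaml entry

    # 1) Create an instance (flavor/image/network names are placeholders).
    server = conn.create_server(
        "detach-test", flavor="m1.small", image="cirros",
        network="private", wait=True)

    # 2) Create an empty 1 GiB volume.
    volume = conn.create_volume(size=1, name="detach-test-vol", wait=True)

    # 3)/4) Attach the volume and wait until it is in-use.
    conn.attach_volume(server, volume, wait=True)

    # 5) Detach the volume. Expected: status returns to "available";
    #    with this bug the wait occasionally times out because the volume
    #    stays "in-use" even though Horizon reported success.
    conn.detach_volume(server, volume, wait=True)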
Environment
===========
Zuul
Logs & Configs
==============
https:/
Video recording from the test (0:30: success popup message and status changes from "In-use" to "Detaching"; 4:02: status changes from "Detaching" back to "In-use"):
https:/
Hey, the current nova master branch works as expected in the CLI and the dashboard. Can you please tell how you created the volume?
The attached video and logs show it is an automation. From this log, the VM (80745088-1d69-45bc-901a-0d4a970fbfd8) was deleted immediately after creation, at 2:55: https://46065918d90ec33ce999-76c305bffd22578a75e29d73d2b2fb28.ssl.cf1.rackcdn.com/910378/3/check/horizon-integration-pytest/f0ab36d/controller/logs/screen-n-cpu.txt
instance-id: 6a0a909d-4f57-4b22-9c45-eefcff748c9b
vol-id: 28772a87-dbf2-484a-8ce7-8a19e8a3115d
Nova logs: Nova did ask for the volume detach and retried up to 8 times:

WARNING nova.virt.libvirt.driver [None req-b726265d-ff18-4879-92c9-5f9875e5288b demo demo] Waiting for libvirt event about the detach of device vdb with device alias ua-28772a87-dbf2-484a-8ce7-8a19e8a3115d from instance 6a0a909d-4f57-4b22-9c45-eefcff748c9b is timed out.
DEBUG nova.virt.libvirt.driver [None req-b726265d-ff18-4879-92c9-5f9875e5288b demo demo] Failed to detach device vdb with device alias ua-28772a87-dbf2-484a-8ce7-8a19e8a3115d from instance 6a0a909d-4f57-4b22-9c45-eefcff748c9b from the live domain config. Libvirt did not report any error but the device is still in the config. {{(pid=98861) _detach_from_live_with_retry /opt/stack/nova/nova/virt/libvirt/driver.py:2599}}
ERROR nova.virt.libvirt.driver [None req-b726265d-ff18-4879-92c9-5f9875e5288b demo demo] Run out of retry while detaching device vdb with device alias ua-28772a87-dbf2-484a-8ce7-8a19e8a3115d from instance 6a0a909d-4f57-4b22-9c45-eefcff748c9b from the live domain config. Device is still attached to the guest.
nova.exception.DeviceDetachFailed: Device detach failed for vdb: Run out of retry while detaching device vdb with device alias ua-28772a87-dbf2-484a-8ce7-8a19e8a3115d from instance 6a0a909d-4f57-4b22-9c45-eefcff748c9b from the live domain config. Device is still attached to the guest.
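These messages come from Nova's _detach_from_live_with_retry path (driver.py:2599 above). As a rough illustration of the wait-then-retry pattern the log describes, not Nova's actual code, and with hypothetical helper callables:

    import time

    MAX_RETRIES = 8      # matches the "retried up to 8 times" observation
    EVENT_TIMEOUT = 20   # seconds to wait for the libvirt detach event (assumed)

    def detach_with_retry(detach_device, device_detached):
        """detach_device / device_detached are hypothetical helpers."""
        for _attempt in range(MAX_RETRIES):
            detach_device()  # ask libvirt to detach from the live domain
            deadline = time.monotonic() + EVENT_TIMEOUT
            while time.monotonic() < deadline:
                if device_detached():
                    return  # device is gone from the live domain config
                time.sleep(1)
            # Timed out waiting for the libvirt event; retry the request.
        # Corresponds to the DeviceDetachFailed error quoted above.
        raise RuntimeError("Run out of retry while detaching device")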
There are no detach logs in c-api or c-vol. c-api does have attachment logs, but no request for detaching the volume.
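One way to narrow down where the request stopped is to compare Nova's and Cinder's views of the attachment. A sketch with the OpenStack SDK, reusing the IDs from this report and the assumed "devstack" cloud name:

    import openstack

    conn = openstack.connect(cloud="devstack")  # assumed cloud name

    server_id = "6a0a909d-4f57-4b22-9c45-eefcff748c9b"  # from this report
    volume_id = "28772a87-dbf2-484a-8ce7-8a19e8a3115d"  # from this report

    # Nova's view: volume attachments recorded against the server.
    for att in conn.compute.volume_attachments(server_id):
        print("nova:", att.volume_id, att.device)

    # Cinder's view: attachments recorded on the volume itself. If the
    # detach request never reached Cinder, it will still be listed here.
    vol = conn.block_storage.get_volume(volume_id)
    for att in vol.attachments:
        print("cinder:", att.get("server_id"), att.get("device"))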
I need to look deeper into this issue, maybe later, but I think this should be a Cinder bug.