Volume "in-use" although VM doesn't exist

Bug #1201418 reported by Marc Koderer
This bug affects 3 people
Affects                    Status    Importance   Assigned to   Milestone
Cinder                     Invalid   Undecided    Unassigned
OpenStack Compute (nova)   Invalid   Undecided    Unassigned

Bug Description

Setup:

devstack on master using default settings.

Steps:

  1) Using tempest/stress with patch https://review.openstack.org/#/c/36652/:
  cd /opt/stack/tempest/tempest/stress
  ./run_stress.py etc/volume-assign-delete-test.json -d 60
  2) The test runs the following workflow (see the Python sketch after this list):
       - create a volume
       - create a VM
       - attach volume to VM
       - delete VM
       - delete volume
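
For illustration, here is the same workflow as a rough Python sketch. It is not the actual tempest stress action (which uses tempest's own REST clients); it assumes python-cinderclient/python-novaclient, and the credentials, image and flavor IDs are placeholders:

    from cinderclient.v1 import client as cinder_client
    from novaclient.v1_1 import client as nova_client

    # Placeholder devstack credentials.
    AUTH = ("demo", "secret", "demo", "http://devstack:5000/v2.0")
    cinder = cinder_client.Client(*AUTH)
    nova = nova_client.Client(*AUTH)

    # create a volume
    vol = cinder.volumes.create(1, display_name="volume663095989")

    # create a VM (image/flavor IDs are placeholders)
    vm = nova.servers.create("instance331154488",
                             image="<image-id>", flavor="<flavor-id>")

    # attach volume to VM -- done here via Cinder's attach call, which (as
    # the comments below establish) only updates Cinder's own records
    cinder.volumes.attach(vol, vm.id, "/dev/vdc")

    # delete VM
    nova.servers.delete(vm)

    # delete volume -- fails with "Volume status must be available or error"
    # because Cinder still believes the volume is attached
    cinder.volumes.delete(vol)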

Problem:

Deletion of the volume fails because its state is still "in-use" even though the VM has already been deleted:

2013-07-15 12:30:58,563 31273 tempest.stress : INFO creating volume: volume663095989
2013-07-15 12:30:59,992 31273 tempest.stress : INFO created volume: cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60
2013-07-15 12:30:59,993 31273 tempest.stress : INFO creating vm: instance331154488
2013-07-15 12:31:11,097 31273 tempest.stress : INFO created vm 4e20442b-8f72-482d-9e7c-59725748784b
2013-07-15 12:31:11,098 31273 tempest.stress : INFO attach volume (cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60) to vm 4e20442b-8f72-482d-9e7c-59725748784b
2013-07-15 12:31:11,265 31273 tempest.stress : INFO volume (cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60) attached to vm 4e20442b-8f72-482d-9e7c-59725748784b
2013-07-15 12:31:11,265 31273 tempest.stress : INFO deleting vm: instance331154488
2013-07-15 12:31:13,780 31273 tempest.stress : INFO deleted vm: 4e20442b-8f72-482d-9e7c-59725748784b
2013-07-15 12:31:13,781 31273 tempest.stress : INFO deleting volume: cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60
Process Process-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/stack/tempest/tempest/stress/actions/volume_attach_delete.py", line 61, in create_delete
    resp, _ = manager.volumes_client.delete_volume(volume['id'])
  File "/opt/stack/tempest/tempest/services/volume/json/volumes_client.py", line 86, in delete_volume
    return self.delete("volumes/%s" % str(volume_id))
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 264, in delete
    return self.request('DELETE', url, headers)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 386, in request
    resp, resp_body)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 436, in _error_checker
    raise exceptions.BadRequest(resp_body)
BadRequest: Bad request
Details: {u'badRequest': {u'message': u'Invalid volume: Volume status must be available or error', u'code': 400}}
2013-07-15 12:31:58,622 31264 tempest.stress : INFO cleaning up

nova list:
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

cinder list:
+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |   Display Name   | Size | Volume Type | Bootable |              Attached to             |
+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+
| cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60 | in-use | volume663095989  |  1   |     None    |  False   | 4e20442b-8f72-482d-9e7c-59725748784b |
+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+

Tags: volumes
Marc Koderer (m-koderer) wrote :

In this state, even a force-deletion of the volume is not possible:

cinder force-delete a31d4eca-52a4-47ff-91a1-e9281addc5e9
ERROR: Volume a31d4eca-52a4-47ff-91a1-e9281addc5e9 is still attached, detach volume first.

Avishay Traeger (avishay-il) wrote :

Is this a Nova issue, where it doesn't disconnect when the VM is deleted?

Haomai Wang (haomai) wrote :

Hi Marc, I can't reproduce the error with the steps you provided. I just typed the commands in a shell by hand.

Marc Koderer (m-koderer) wrote :

Hi Haomai, yes, sorry that I didn't mention that. From the command line it works, so it seems to be a timing problem.

Mike Perez (thingee) wrote :

I had the same thought as Avishay. I'm not sure we want Cinder constantly verifying in-use volumes.

Matt Riedemann (mriedem)
tags: added: volumes
John Griffith (john-griffith) wrote :

This is likely part of the cleanup on the BDM (block device mapping) side or in the caching. There are some other issues related to this, such as a failed attach never being cleaned up on the compute side.

Changed in cinder:
status: New → Invalid
David Scannell (dscannell) wrote :

I commented on that patch (https://review.openstack.org/#/c/36652) and I think the issue arises when a user calls the Cinder API "attach_volume" directly. My understanding is that Cinder's attach_volume simply updates Cinder's database by marking the volume as 'in-use' and associating it with the instance. It does not make a call to Nova to actually attach the volume to the instance. In fact, I believe the direction is expected to be the other way around: Nova's attach_volume will, at the very end, call Cinder's attach_volume to inform Cinder that the volume is now attached.
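
For illustration only (nothing below is taken from the bug), the bare REST call behind that Cinder-side attach looks roughly like this; the endpoint, token and mountpoint are placeholders, and "os-attach" is the Cinder volume action involved:

    import json
    import requests

    CINDER = "http://devstack:8776/v1/<tenant-id>"   # placeholder endpoint
    TOKEN = "<keystone-token>"                       # placeholder token
    VOLUME = "cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60"

    # This marks the volume "in-use" and records the instance UUID in
    # Cinder's database; no request is ever sent to Nova, so nothing is
    # actually attached to the instance.
    requests.post(
        "%s/volumes/%s/action" % (CINDER, VOLUME),
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        data=json.dumps({"os-attach": {
            "instance_uuid": "4e20442b-8f72-482d-9e7c-59725748784b",
            "mountpoint": "/dev/vdc",
        }}))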

Marc Koderer (m-koderer) wrote :

Thanks for the feedback - using the Nova client fixes the problem.

Changed in nova:
status: New → Invalid
Florent Flament (florentflament) wrote :

I actually had a similar issue with a user having volumes attached to non-existent VMs.

I discovered that the user was using the not-so-well-documented Cinder API call ``POST /volumes/{volume_id}/action`` through the python-cinderclient library. There are methods ``cinderclient.v{1,2}.volumes.VolumeManager.attach`` that allow a user to "set attachment metadata" without actually attaching the volume to an instance.

To attach a volume to an instance in a Python script, one has to use the ``novaclient.v1_1.volumes.VolumeManager.create_server_volume`` method.

I wrote some documentation on that issue there: http://www.florentflament.com/blog/openstack-volume-in-use-although-vm-doesnt-exist.html
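
A minimal sketch of the difference, assuming placeholder credentials, IDs and device path:

    from cinderclient.v1 import client as cinder_client
    from novaclient.v1_1 import client as nova_client

    AUTH = ("demo", "secret", "demo", "http://devstack:5000/v2.0")

    # Wrong for this purpose: only sets attachment metadata in Cinder's
    # database; the instance never sees the disk, and Nova will not detach
    # it when the VM is deleted.
    cinder_client.Client(*AUTH).volumes.attach(
        "cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60",
        "4e20442b-8f72-482d-9e7c-59725748784b",
        "/dev/vdc")

    # Correct: ask Nova to attach the volume. Nova connects the disk to the
    # instance and then informs Cinder, so deleting the VM later also
    # detaches the volume cleanly.
    nova_client.Client(*AUTH).volumes.create_server_volume(
        server_id="4e20442b-8f72-482d-9e7c-59725748784b",
        volume_id="cb4d625c-c4d8-43ee-9bdd-d4fa4e1d2c60",
        device="/dev/vdc")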

Trung Trinh (trung-t-trinh) wrote :

Hi all,

This bug is related to the current bug that I'm working on (https://bugs.launchpad.net/nova/+bug/1335889).

The bug can be summarized as follows: it is impossible to delete a volume that is attached to a VM instance that has already been deleted.

To trigger or force the bug, we can simply disable Cinder's "detach()" function in the module "cinder/volume/api.py".
This is because Nova's deletion of a VM always triggers the execution of Cinder's "detach()" function.

If we do so, we end up with the scenario where a volume is attached to an already-deleted VM.
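
As a rough illustration only (this is not the actual Cinder source), "disabling" that function amounts to something like:

    # cinder/volume/api.py -- hypothetical no-op used only to reproduce the
    # scenario described above
    class API(object):
        def detach(self, context, volume):
            # Normally this asks the volume manager to detach the volume and
            # reset its status to "available". Making it a no-op leaves the
            # volume "in-use" even after Nova has deleted the VM.
            pass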
