xenapi: error on volume detach StorageError: Unable to find SR from VBD

Bug #1101229 reported by Mate Lakat
Affects: OpenStack Compute (nova)
Status: Fix Released
Importance: Undecided
Assigned to: Mate Lakat

Bug Description

Steps to reproduce:

. devstack/openrc admin
nova boot --flavor=m1.small --image=$IMAGEID testmachine
cinder create 1
nova volume-attach $INSTANCEID $VOLUMEID /dev/xvdb
nova volume-detach $INSTANCEID $VOLUMEID

Stacktrace on n-cpu:

AUDIT nova.compute.manager [req-27df4d59-8b96-432d-9c61-559e3311c05e admin demo] [instance: 3679b9d8-c8ea-4ddc-a13f-a5d0d630d6f4] Detach volume 3cbb15ba-a918-4240-be65-6444baf526f2 from mountpoint /dev/xvdb
ERROR nova.virt.xenapi.volume_utils [req-27df4d59-8b96-432d-9c61-559e3311c05e admin demo] ['HANDLE_INVALID', 'VBD', 'OpaqueRef:202022d2-babd-a1f6-afd8-b79389b761cf']
TRACE nova.virt.xenapi.volume_utils Traceback (most recent call last):
TRACE nova.virt.xenapi.volume_utils File "/opt/stack/nova/nova/virt/xenapi/volume_utils.py", line 173, in find_sr_from_vbd
TRACE nova.virt.xenapi.volume_utils vdi_ref = session.call_xenapi("VBD.get_VDI", vbd_ref)
TRACE nova.virt.xenapi.volume_utils File "/opt/stack/nova/nova/virt/xenapi/driver.py", line 709, in call_xenapi
TRACE nova.virt.xenapi.volume_utils return session.xenapi_request(method, args)
TRACE nova.virt.xenapi.volume_utils File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 133, in xenapi_request
TRACE nova.virt.xenapi.volume_utils result = _parse_result(getattr(self, methodname)(*full_params))
TRACE nova.virt.xenapi.volume_utils File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 203, in _parse_result
TRACE nova.virt.xenapi.volume_utils raise Failure(result['ErrorDescription'])
TRACE nova.virt.xenapi.volume_utils Failure: ['HANDLE_INVALID', 'VBD', 'OpaqueRef:202022d2-babd-a1f6-afd8-b79389b761cf']
TRACE nova.virt.xenapi.volume_utils
ERROR nova.virt.xenapi.volumeops [req-27df4d59-8b96-432d-9c61-559e3311c05e admin demo] Unable to find SR from VBD OpaqueRef:202022d2-babd-a1f6-afd8-b79389b761cf
TRACE nova.virt.xenapi.volumeops Traceback (most recent call last):
TRACE nova.virt.xenapi.volumeops File "/opt/stack/nova/nova/virt/xenapi/volumeops.py", line 146, in detach_volume
TRACE nova.virt.xenapi.volumeops sr_ref = volume_utils.find_sr_from_vbd(self._session, vbd_ref)
TRACE nova.virt.xenapi.volumeops File "/opt/stack/nova/nova/virt/xenapi/volume_utils.py", line 177, in find_sr_from_vbd
TRACE nova.virt.xenapi.volumeops raise StorageError(_('Unable to find SR from VBD %s') % vbd_ref)
TRACE nova.virt.xenapi.volumeops StorageError: Unable to find SR from VBD OpaqueRef:202022d2-babd-a1f6-afd8-b79389b761cf
TRACE nova.virt.xenapi.volumeops
ERROR nova.compute.manager [req-27df4d59-8b96-432d-9c61-559e3311c05e admin demo] [instance: 3679b9d8-c8ea-4ddc-a13f-a5d0d630d6f4] Faild to detach volume 3cbb15ba-a918-4240-be65-6444baf526f2 from /dev/xvdb
TRACE nova.compute.manager [instance: 3679b9d8-c8ea-4ddc-a13f-a5d0d630d6f4] Traceback (most recent call last):
TRACE nova.compute.manager [instance: 3679b9d8-c8ea-4ddc-a13f-a5d0d630d6f4] File "/opt/stack/nova/nova/compute/manager.py", line 2558, in _detach_volume
TRACE nova.compute.manager [instance: 3679b9d8-c8ea-4ddc-a13f-a5d0d630d6f4] mp)
TRACE nova.compute.manager [instance: 3679b9d8-c8ea-4ddc-a13f-a5d0d630d6f4] File "/opt/stack/nova/nova/virt/xenapi/driver.py", line 370, in detach_volume
TRACE nova.compute.manager [instance: 3679b9d8-c8ea-4ddc-a13f-a5d0d630d6f4] mountpoint)
TRACE nova.compute.manager [instance: 3679b9d8-c8ea-4ddc-a13f-a5d0d630d6f4] File "/opt/stack/nova/nova/virt/xenapi/volumeops.py", line 150, in detach_volume
TRACE nova.compute.manager [instance: 3679b9d8-c8ea-4ddc-a13f-a5d0d630d6f4] raise Exception(_('Error purging SR %s') % sr_ref)
TRACE nova.compute.manager [instance: 3679b9d8-c8ea-4ddc-a13f-a5d0d630d6f4] UnboundLocalError: local variable 'sr_ref' referenced before assignment
TRACE nova.compute.manager [instance: 3679b9d8-c8ea-4ddc-a13f-a5d0d630d6f4]
ERROR nova.openstack.common.rpc.amqp [req-27df4d59-8b96-432d-9c61-559e3311c05e admin demo] Exception during message handling
TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 276, in _process_data
TRACE nova.openstack.common.rpc.amqp rval = self.proxy.dispatch(ctxt, version, method, **args)
TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 133, in dispatch
TRACE nova.openstack.common.rpc.amqp return getattr(proxyobj, method)(ctxt, **kwargs)
TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/exception.py", line 110, in wrapped
TRACE nova.openstack.common.rpc.amqp temp_level, payload)
TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
TRACE nova.openstack.common.rpc.amqp self.gen.next()
TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/exception.py", line 89, in wrapped
TRACE nova.openstack.common.rpc.amqp return f(self, context, *args, **kw)
TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 206, in decorated_function
TRACE nova.openstack.common.rpc.amqp pass
TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
TRACE nova.openstack.common.rpc.amqp self.gen.next()
TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 192, in decorated_function
TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs)
TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 233, in decorated_function
TRACE nova.openstack.common.rpc.amqp kwargs['instance'], e, sys.exc_info())
TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
TRACE nova.openstack.common.rpc.amqp self.gen.next()
TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 221, in decorated_function
TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs)
TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 2593, in detach_volume
TRACE nova.openstack.common.rpc.amqp self._detach_volume(context, instance, bdm)
TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 2565, in _detach_volume
TRACE nova.openstack.common.rpc.amqp self.volume_api.roll_detaching(context, volume)
TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
TRACE nova.openstack.common.rpc.amqp self.gen.next()
TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 2558, in _detach_volume
TRACE nova.openstack.common.rpc.amqp mp)
TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/virt/xenapi/driver.py", line 370, in detach_volume
TRACE nova.openstack.common.rpc.amqp mountpoint)
TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/virt/xenapi/volumeops.py", line 150, in detach_volume
TRACE nova.openstack.common.rpc.amqp raise Exception(_('Error purging SR %s') % sr_ref)
TRACE nova.openstack.common.rpc.amqp UnboundLocalError: local variable 'sr_ref' referenced before assignment
TRACE nova.openstack.common.rpc.amqp

So there are two problems here:
- the volume detach error itself (StorageError: Unable to find SR from VBD)
- the UnboundLocalError raised in the error handler, which masks it
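The second problem is a classic error-handler pitfall: the except branch references a name that is only bound if the earlier call succeeded. A minimal sketch of the failure mode (function names are hypothetical stand-ins, not nova's actual code):

```python
def find_sr_from_vbd(vbd_ref):
    # Stand-in for volume_utils.find_sr_from_vbd failing because
    # the VBD has already been destroyed.
    raise RuntimeError("Unable to find SR from VBD %s" % vbd_ref)

def detach_volume(vbd_ref):
    try:
        sr_ref = find_sr_from_vbd(vbd_ref)  # raises before sr_ref is bound
    except RuntimeError:
        # sr_ref was never assigned, so formatting this message raises
        # UnboundLocalError and masks the original storage error.
        raise Exception("Error purging SR %s" % sr_ref)
```

Calling `detach_volume` here surfaces the UnboundLocalError instead of the real failure, exactly as in the trace above.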

Mate Lakat (mate-lakat)
description: updated
Revision history for this message
Mate Lakat (mate-lakat) wrote :

This bug was introduced by:
https://review.openstack.org/#/c/19219/

The problem is that the VBD has already been destroyed by the time we try to get hold of the SR.

Mate Lakat (mate-lakat)
Changed in nova:
assignee: nobody → Mate Lakat (mate-lakat)
status: New → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/20031

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to nova (master)

Reviewed: https://review.openstack.org/20031
Committed: http://github.com/openstack/nova/commit/41848d419cc87f3344c2d071e42561bb52c0c577
Submitter: Jenkins
Branch: master

commit 41848d419cc87f3344c2d071e42561bb52c0c577
Author: Mate Lakat <email address hidden>
Date: Fri Jan 18 15:12:15 2013 +0000

    XenAPI: Fix volume detach

    fixes bug 1101229

    The code-cleanup and code-move changes from this patch:
    https://review.openstack.org/#/c/19219/
    introduced a problem: the volume_utils.find_sr_from_vbd method was
    called after the vbd has already been destroyed. This change fixes the
    issue by moving the sr discovery before the vbd destruction, and tests
    the call order by using side effects.

    Change-Id: Ide4f8ac810f98bb192909f5f0408affc940e7446
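The corrected ordering described in the commit message can be sketched roughly as follows (the helper names are hypothetical; only the call order mirrors the fix, and the side-effect recording mirrors how the merged test asserts it):

```python
calls = []  # records side effects so the call order can be asserted

def find_sr_from_vbd(vbd_ref):
    calls.append("find_sr")
    return "OpaqueRef:sr"

def destroy_vbd(vbd_ref):
    calls.append("destroy_vbd")

def forget_sr(sr_ref):
    calls.append("forget_sr")

def detach_volume(vbd_ref):
    # Resolve the SR while the VBD still exists, only then destroy
    # the VBD, and finally clean up the SR.
    sr_ref = find_sr_from_vbd(vbd_ref)
    destroy_vbd(vbd_ref)
    forget_sr(sr_ref)
```

With SR discovery moved before VBD destruction, `sr_ref` is always bound when the cleanup step runs.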

Changed in nova:
status: In Progress → Fix Committed
Thierry Carrez (ttx)
Changed in nova:
milestone: none → grizzly-3
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in nova:
milestone: grizzly-3 → 2013.1