nova-compute cannot attach volume if mountpoint exists

Bug #884635 reported by livemoon
This bug affects 3 people
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Medium
Assigned to: Unassigned

Bug Description

I used "nova volume-list" to find a volume is available status and attach no instance, but actually it is attached to a instance, and mount with /dev/vdb in that instance and can read/write. Then I detach this volume and want to attach again. But when I use "nova attach server_id volume_id /dev/vdb", it is errro show that /dev/vdb exists. In instance, I use pvdisplay or fdisk -l , both show /dev/vdb error.
I think where locked the /dev/vdb in instance, may it is a bug ?
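(For reference, a sketch of the same sequence using python-novaclient instead of the CLI; the credentials and IDs are placeholders, and the exact client calls are assumptions based on the client of that era, not a verified reproduction script.)

# Hypothetical sketch of the reproduction above using python-novaclient;
# credentials, server_id, and volume_id are placeholders.
from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'project', 'http://keystone:5000/v2.0/')

for vol in nova.volumes.list():
    print('%s %s' % (vol.id, vol.status))       # volume shows as "available"

# Attaching to the same mountpoint again fails with
# "libvirtError: operation failed: target vdb already exists"
nova.volumes.create_server_volume('16', '1', '/dev/vdb')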

2011-11-01 15:24:28,938 ERROR nova.compute.manager [84abd913-684b-402c-b312-a986263e0d20 admin 1] instance 16: attach failed /dev/vdb, removing
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE: File "/data/nova/nova/compute/manager.py", line 1359, in attach_volume
(nova.compute.manager): TRACE: mountpoint)
(nova.compute.manager): TRACE: File "/data/nova/nova/exception.py", line 113, in wrapped
(nova.compute.manager): TRACE: return f(*args, **kw)
(nova.compute.manager): TRACE: File "/data/nova/nova/virt/libvirt/connection.py", line 380, in attach_volume
(nova.compute.manager): TRACE: virt_dom.attachDevice(xml)
(nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/libvirt.py", line 298, in attachDevice
(nova.compute.manager): TRACE: if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
(nova.compute.manager): TRACE: libvirtError: operation failed: target vdb already exists
(nova.compute.manager): TRACE:
2011-11-01 15:24:28,974 DEBUG nova.rpc [-] Making asynchronous call on volume.node2 ... from (pid=6733) multicall /data/nova/nova/rpc/impl_kombu.py:721
2011-11-01 15:24:28,975 DEBUG nova.rpc [-] MSG_ID is b10ea92631694842a019fd7e29a0088a from (pid=6733) multicall /data/nova/nova/rpc/impl_kombu.py:724
2011-11-01 15:24:29,043 ERROR nova.rpc [-] Exception during message handling
(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE: File "/data/nova/nova/rpc/impl_kombu.py", line 620, in _process_data
(nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE: File "/data/nova/nova/compute/manager.py", line 120, in decorated_function
(nova.rpc): TRACE: function(self, context, instance_id, *args, **kwargs)
(nova.rpc): TRACE: File "/data/nova/nova/compute/manager.py", line 1368, in attach_volume
(nova.rpc): TRACE: raise exc
(nova.rpc): TRACE: TypeError: __init__() takes at least 2 arguments (1 given)
(nova.rpc): TRACE:
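The trailing TypeError is a separate, smaller problem: it is the error Python produces when an exception class whose constructor requires arguments is raised without them, which is what the bare `raise exc` appears to hit here. A minimal illustration of that Python behaviour (hypothetical, not nova's actual code):

# Minimal illustration (hypothetical, not nova code): raising an exception
# *class* whose __init__ needs arguments yields a TypeError like the one above.
class AttachFailed(Exception):
    def __init__(self, device, reason):
        super(AttachFailed, self).__init__(device, reason)

try:
    raise AttachFailed          # class raised bare, without its arguments
except TypeError as e:
    print(e)                    # __init__() takes ... arguments (1 given)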

Tags: volume
affects: glance → nova
Zhongyue Luo (zyluo)
summary: - nova-compute cannot attach volume since of exists mountpoint
+ nova-compute cannot attach volume if mountpoint exists
Brian Waldon (bcwaldon)
Changed in nova:
status: New → Confirmed
importance: Undecided → Medium
tags: added: volume
Revision history for this message
Chris Fattarsi (chris-fattarsi) wrote :

I usually work around this by specifying the device as /dev/vdz when attaching; on the VM it will still be recognized as the next device in line, e.g. /dev/vdc or /dev/vdd, but this way I avoid the failure above.

How should this be fixed?
Should nova fail if it cannot set the device as it will be seen on the VM?
Or.. something else?

Revision history for this message
Vish Ishaya (vishvananda) wrote : Re: [Bug 884635] Re: nova-compute cannot attach volume if mountpoint exists

There isn't really any way for libvirt to know where the guest will stick the device. I think the only solution is to do some udev magic in the guest to figure out where the PCI device is supposed to show up, by talking to the metadata server.
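As an illustration of what "talking to the metadata server" could look like from inside the guest, here is a sketch that reads the EC2-style block-device-mapping entries exposed by nova's metadata service (entry names vary by deployment; this is an illustration, not a ready-made fix):

# Hypothetical guest-side sketch: ask the metadata service which block
# devices the cloud thinks are attached.  Entry names vary by deployment.
import urllib2

BASE = 'http://169.254.169.254/latest/meta-data/block-device-mapping/'

for entry in urllib2.urlopen(BASE).read().splitlines():
    device = urllib2.urlopen(BASE + entry).read()
    print('%s -> %s' % (entry, device))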

Revision history for this message
Brian Waldon (bcwaldon) wrote :

I'm not sure how we fix this bug. Ideas?

Revision history for this message
Chris Fattarsi (chris-fattarsi) wrote :

I propose making the mountpoint an optional argument and handling the error in the above case so that it doesn't fail.

It is inconsistent that the attach fails when the requested mountpoint is already in use, but quietly does something else when you specify a device name higher than the next one in sequence.
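A rough sketch of what the "optional mountpoint" idea could look like, with a hypothetical helper that picks the next free virtio device name when none is supplied (names and logic are illustrative only, not nova's actual implementation):

# Hypothetical helper, illustrative only: choose the next unused /dev/vdX
# when the caller does not supply a mountpoint.
import string

def next_free_device(used_devices):
    """Return the first /dev/vdX name not present in used_devices."""
    for letter in string.ascii_lowercase[1:]:   # skip 'a', the root disk
        candidate = '/dev/vd%s' % letter
        if candidate not in used_devices:
            return candidate
    raise ValueError('no free virtio device names left')

print(next_free_device(['/dev/vda', '/dev/vdb']))   # -> /dev/vdc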

Revision history for this message
yong sheng gong (gongysh) wrote :

I cannot reproduce this on the latest master branch.

Revision history for this message
Dan Smith (danms) wrote :

Aside from a small collateral bug (https://review.openstack.org/#/c/10283/), I believe this is something coming from libvirt/qemu itself. It seems that it's likely related to not running ACPI/PCI hotplugging in the guest, which means that a detach cannot actually fully clean up a given device, and thus an attempt to re-attach at the same point must fail for safety reasons. See this bug for details:

https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/897750

I think that this one needs to be marked Invalid as a result of it not being related to nova.
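One way to see what Dan describes is to check whether the target device is still listed in the domain XML after the detach; a sketch using the libvirt Python bindings (the domain name is a placeholder):

# Hypothetical check (placeholder domain name): if 'vdb' is still listed as a
# <target dev=...> after the detach, libvirt will refuse a re-attach there.
import libvirt
from xml.etree import ElementTree

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000010')
tree = ElementTree.fromstring(dom.XMLDesc(0))
targets = [t.get('dev') for t in tree.findall('.//devices/disk/target')]
print(targets)          # e.g. ['vda', 'vdb'] -> 'vdb' was never fully detached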

Changed in nova:
status: Confirmed → Invalid