I created two VMs and a volume, then attached the volume to the first VM with:
nova volume-attach oneiric-1 2 /dev/vdd
I detached the volume with volume-detach and attached it to the second VM.
I then detached it from the second VM and tried to attach it to the first VM again with the same command. The
attach failed, with the error below in the nova-compute log. Retrying with /dev/vde instead of /dev/vdd worked. This was with KVM.
I know that with KVM there is no relation between the device name you pass to volume-attach and the
device the guest actually sees. This is confusing: if I attach more than one volume and mount a filesystem on each, there seems to be no way to reattach them later and be sure each one ends up in the right place. Should they be mounted by UUID? Am I missing something obvious?
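For what it's worth, mounting by filesystem UUID is the usual way to make mounts independent of device naming. A rough sketch from inside the guest (the device /dev/vdd and mount point /mnt/data are just example names from this report, not anything Nova mandates):

```shell
# Read the filesystem UUID, then mount by UUID so the mount no longer
# depends on which /dev/vdX the volume happens to appear as.
uuid=$(sudo blkid -s UUID -o value /dev/vdd)
sudo mkdir -p /mnt/data
sudo mount "UUID=$uuid" /mnt/data

# To make it persistent, key the fstab entry on the UUID as well.
# "nofail" keeps the guest booting even when the volume is detached.
echo "UUID=$uuid  /mnt/data  ext4  defaults,nofail  0  2" | sudo tee -a /etc/fstab
```

That way it doesn't matter whether the volume comes back as vdd or vde on reattach.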
2012-01-16 11:07:57,853 DEBUG nova.utils [95bd2914-7ba4-46c3-a4c0-22327816df0e tester testproject] Running cmd (subprocess): sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000002 -p 172.18.0.131:3260 --rescan from (pid=21157) execute /usr/lib/python2.7/dist-packages/nova/utils.py:167
2012-01-16 11:07:57,872 DEBUG nova.volume.driver [95bd2914-7ba4-46c3-a4c0-22327816df0e tester testproject] iscsiadm ('--rescan',): stdout=Rescanning session [sid: 5, target: iqn.2010-10.org.openstack:volume-00000002, portal: 172.18.0.131,3260]
stderr= from (pid=21157) _run_iscsiadm /usr/lib/python2.7/dist-packages/nova/volume/driver.py:514
2012-01-16 11:07:58,872 DEBUG nova.volume.driver [95bd2914-7ba4-46c3-a4c0-22327816df0e tester testproject] Found iSCSI node /dev/disk/by-path/ip-172.18.0.131:3260-iscsi-iqn.2010-10.org.openstack:volume-00000002-lun-0 (after 1 rescans) from (pid=21157) discover_volume /usr/lib/python2.7/dist-packages/nova/volume/driver.py:570
2012-01-16 11:08:00,326 ERROR nova.compute.manager [95bd2914-7ba4-46c3-a4c0-22327816df0e tester testproject] instance 14: attach failed /dev/vdd, removing
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1280, in attach_volume
(nova.compute.manager): TRACE: mountpoint)
(nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 100, in wrapped
(nova.compute.manager): TRACE: return f(*args, **kw)
(nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 373, in attach_volume
(nova.compute.manager): TRACE: virt_dom.attachDevice(xml)
(nova.compute.manager): TRACE: File "/usr/lib/python2.7/dist-packages/libvirt.py", line 298, in attachDevice
(nova.compute.manager): TRACE: if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
(nova.compute.manager): TRACE: libvirtError: operation failed: adding virtio-blk-pci,bus=pci.0,addr=0x8,drive=drive-virtio-disk3,id=virtio-disk3 device failed: Duplicate ID 'virtio-disk3' for device
(nova.compute.manager): TRACE:
(nova.compute.manager): TRACE:
2012-01-16 11:08:00,362 WARNING nova.volume.driver [95bd2914-7ba4-46c3-a4c0-22327816df0e tester testproject] ISCSI provider_location not stored, using discovery
2012-01-16 11:08:00,362 DEBUG nova.utils [95bd2914-7ba4-46c3-a4c0-22327816df0e tester testproject] Running cmd (subprocess): sudo iscsiadm -m discovery -t sendtargets -p xg03 from (pid=21157) execute /usr/lib/python2.7/dist-packages/nova/utils.py:167
2012-01-16 11:08:00,382 DEBUG nova.volume.driver [95bd2914-7ba4-46c3-a4c0-22327816df0e tester testproject] ISCSI Discovery: Found 172.18.0.131:3260,1 iqn.2010-10.org.openstack:volume-00000002 from (pid=21157) _get_iscsi_properties /usr/lib/python2.7/dist-packages/nova/volume/driver.py:487
2012-01-16 11:08:00,383 DEBUG nova.utils [95bd2914-7ba4-46c3-a4c0-22327816df0e tester testproject] Running cmd (subprocess): sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000002 -p 172.18.0.131:3260 --op update -n node.startup -v manual from (pid=21157) execute /usr/lib/python2.7/dist-packages/nova/utils.py:167
2012-01-16 11:08:00,398 DEBUG nova.volume.driver [95bd2914-7ba4-46c3-a4c0-22327816df0e tester testproject] iscsiadm ('--op', 'update', '-n', 'node.startup', '-v', 'manual'): stdout= stderr= from (pid=21157) _run_iscsiadm /usr/lib/python2.7/dist-packages/nova/volume/driver.py:514
2012-01-16 11:08:00,399 DEBUG nova.utils [95bd2914-7ba4-46c3-a4c0-22327816df0e tester testproject] Running cmd (subprocess): sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000002 -p 172.18.0.131:3260 --logout from (pid=21157) execute /usr/lib/python2.7/dist-packages/nova/utils.py:167
2012-01-16 11:08:01,054 DEBUG nova.volume.driver [95bd2914-7ba4-46c3-a4c0-22327816df0e tester testproject] iscsiadm ('--logout',): stdout=Logging out of session [sid: 5, target: iqn.2010-10.org.openstack:volume-00000002, portal: 172.18.0.131,3260]
Logout of [sid: 5, target: iqn.2010-10.org.openstack:volume-00000002, portal: 172.18.0.131,3260]: successful
stderr= from (pid=21157) _run_iscsiadm /usr/lib/python2.7/dist-packages/nova/volume/driver.py:514
2012-01-16 11:08:01,055 DEBUG nova.utils [95bd2914-7ba4-46c3-a4c0-22327816df0e tester testproject] Running cmd (subprocess): sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000002 -p 172.18.0.131:3260 --op delete from (pid=21157) execute /usr/lib/python2.7/dist-packages/nova/utils.py:167
2012-01-16 11:08:01,070 DEBUG nova.volume.driver [95bd2914-7ba4-46c3-a4c0-22327816df0e tester testproject] iscsiadm ('--op', 'delete'): stdout= stderr= from (pid=21157) _run_iscsiadm /usr/lib/python2.7/dist-packages/nova/volume/driver.py:514
2012-01-16 11:08:01,070 ERROR nova.rpc [-] Exception during message handling
(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE: File "/usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py", line 620, in _process_data
(nova.rpc): TRACE: rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE: File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 118, in decorated_function
(nova.rpc): TRACE: function(self, context, instance_id, *args, **kwargs)
(nova.rpc): TRACE: File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1303, in attach_volume
(nova.rpc): TRACE: raise exc
(nova.rpc): TRACE: libvirtError: operation failed: adding virtio-blk-pci,bus=pci.0,addr=0x8,drive=drive-virtio-disk3,id=virtio-disk3 device failed: Duplicate ID 'virtio-disk3' for device
I've noticed this before: it seems impossible to reuse the same mountpoint for a second attach.
It would be good to find out whether there is a way to prevent this; otherwise, perhaps Nova should track the device names already in use and pick the next free one, instead of using whatever the user passes in.
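As a rough sketch of that second idea, the compute node could ask libvirt which target devices the domain already has and choose the first unused vdX. This assumes virsh domblklist is available on the host and that the domain name matches the instance ("oneiric-1" is just the example from this thread):

```shell
# Hypothetical sketch: list the block target names libvirt already knows
# for the domain, then pick the first /dev/vdX not in that list.
used=$(virsh domblklist oneiric-1 2>/dev/null | awk 'NR>2 && NF {print $1}')
next=""
for letter in a b c d e f g h i j k l m n o p; do
  dev="vd$letter"
  if ! printf '%s\n' "$used" | grep -qx "$dev"; then
    next="$dev"
    break
  fi
done
echo "next free device: /dev/$next"
```

Something equivalent inside the compute manager would sidestep the "Duplicate ID" failure entirely, since a previously used name would never be handed back to libvirt.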