I was using devstack on trunk. I attached a volume to a VM at /dev/vdb using the nova client, but that device already existed on the instance. The nova command returned without error, but the attach obviously did not happen, and the following showed up in the nova logs:
Jan 11 14:14:45 xg06eth0 2012-01-11 14:14:45,164 ERROR nova.compute.manager [aa780ef5-4591-4d5f-b113-9ae63568647b demo 2] instance 3d37d21e-8fba-46a4-a0e1-d137f1d17597: attach failed /dev/vdb, removing
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 1529, in attach_volume
(nova.compute.manager): TRACE:     mountpoint)
(nova.compute.manager): TRACE:   File "/opt/stack/nova/nova/exception.py", line 130, in wrapped
(nova.compute.manager): TRACE:     return f(*args, **kw)
(nova.compute.manager): TRACE:   File "/opt/stack/nova/nova/virt/libvirt/connection.py", line 428, in attach_volume
(nova.compute.manager): TRACE:     virt_dom.attachDevice(xml)
(nova.compute.manager): TRACE:   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 298, in attachDevice
(nova.compute.manager): TRACE:     if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
(nova.compute.manager): TRACE: libvirtError: operation failed: target vdb already exists
(nova.compute.manager): TRACE:
Jan 11 14:14:45 xg06eth0 2012-01-11 14:14:45,194 DEBUG nova.rpc [-] Making asynchronous call on volume.xg06 ... from (pid=8603) multicall /opt/stack/nova/nova/rpc/impl_kombu.py:759
Jan 11 14:14:45 xg06eth0 2012-01-11 14:14:45,195 DEBUG nova.rpc [-] MSG_ID is 6735676270c241c4859384769641ca56 from (pid=8603) multicall /opt/stack/nova/nova/rpc/impl_kombu.py:762
Jan 11 14:14:45 xg06eth0 2012-01-11 14:14:45,202 DEBUG nova.rpc [-] received {u'_context_roles': [u'Member', u'sysadmin', u'netadmin'], u'_msg_id': u'6735676270c241c4859384769641ca56', u'_context_read_deleted': u'no', u'_context_request_id': u'aa780ef5-4591-4d5f-b113-9ae63568647b', u'args': {u'volume_id': 1, u'address': u'172.18.0.146'}, u'_context_auth_token': u'07302338-e85e-49cf-a50c-c230a30ad637', u'_context_strategy': u'keystone', u'_context_is_admin': True, u'_context_project_id': u'2', u'_context_timestamp': u'2012-01-11T19:14:42.636546', u'_context_user_id': u'demo', u'method': u'terminate_connection', u'_context_remote_address': u'172.18.0.146'} from (pid=8709) __call__ /opt/stack/nova/nova/rpc/impl_kombu.py:629
Jan 11 14:14:45 xg06eth0 2012-01-11 14:14:45,202 DEBUG nova.rpc [-] unpacked context: {'user_id': u'demo', 'roles': [u'Member', u'sysadmin', u'netadmin'], 'timestamp': u'2012-01-11T19:14:42.636546', 'auth_token': u'07302338-e85e-49cf-a50c-c230a30ad637', 'msg_id': u'6735676270c241c4859384769641ca56', 'remote_address': u'172.18.0.146', 'strategy': u'keystone', 'is_admin': True, 'request_id': u'aa780ef5-4591-4d5f-b113-9ae63568647b', 'project_id': u'2', 'read_deleted': u'no'} from (pid=8709) _unpack_context /opt/stack/nova/nova/rpc/impl_kombu.py:675
Jan 11 14:14:45 xg06eth0 2012-01-11 14:14:45,245 ERROR nova.rpc [-] Exception during message handling
(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/rpc/impl_kombu.py", line 649, in _process_data
(nova.rpc): TRACE:     rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/exception.py", line 130, in wrapped
(nova.rpc): TRACE:     return f(*args, **kw)
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 126, in decorated_function
(nova.rpc): TRACE:     function(self, context, instance_uuid, *args, **kwargs)
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 149, in decorated_function
(nova.rpc): TRACE:     self.add_instance_fault_from_exc(context, instance_uuid, e)
(nova.rpc): TRACE:   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
(nova.rpc): TRACE:     self.gen.next()
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 144, in decorated_function
(nova.rpc): TRACE:     return function(self, context, instance_uuid, *args, **kwargs)
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 1536, in attach_volume
(nova.rpc): TRACE:     address)
(nova.rpc): TRACE:   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
(nova.rpc): TRACE:     self.gen.next()
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/compute/manager.py", line 1529, in attach_volume
(nova.rpc): TRACE:     mountpoint)
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/exception.py", line 130, in wrapped
(nova.rpc): TRACE:     return f(*args, **kw)
(nova.rpc): TRACE:   File "/opt/stack/nova/nova/virt/libvirt/connection.py", line 428, in attach_volume
(nova.rpc): TRACE:     virt_dom.attachDevice(xml)
(nova.rpc): TRACE:   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 298, in attachDevice
(nova.rpc): TRACE:     if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
(n
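
For context, the call path in the traceback can be driven directly from python-novaclient; a minimal repro sketch (the credentials and auth URL are assumptions for this devstack setup; only the volume ID and instance UUID are taken from the log above) would look like:

from novaclient.v1_1 import client

# Hypothetical devstack credentials and auth URL; only the instance UUID
# and volume_id=1 come from the log above.
nc = client.Client('demo', 'password', 'demo',
                   'http://172.18.0.146:5000/v2.0/')

# Maps to the attach_volume RPC seen in the traceback; the API call
# returns immediately even though the compute node later fails with
# "target vdb already exists".
nc.volumes.create_server_volume(
    '3d37d21e-8fba-46a4-a0e1-d137f1d17597',  # server UUID
    1,                                       # volume_id
    '/dev/vdb')                              # device that already exists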
After that, I was also unable to attach the volume at /dev/vdc, even though that device did not already exist on the instance. There were no errors while devstack was setting up the nova volume service.
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,190 AUDIT nova.api.openstack.v2.contrib.volumes [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] Attach volume 1 to instance 3d37d21e-8fba-46a4-a0e1-d137f1d17597 at /dev/vdc
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,265 DEBUG nova.rpc [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] Making asynchronous cast on compute.xg06... from (pid=8466) cast /opt/stack/nova/nova/rpc/impl_kombu.py:784
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,268 DEBUG nova.rpc [-] received {u'_context_roles': [u'Member', u'sysadmin', u'netadmin'], u'_context_request_id': u'229ab0dc-3085-4f4c-bb80-d8104c96bef8', u'_context_read_deleted': u'no', u'args': {u'instance_uuid': u'3d37d21e-8fba-46a4-a0e1-d137f1d17597', u'mountpoint': u'/dev/vdc', u'volume_id': 1}, u'_context_auth_token': u'07302338-e85e-49cf-a50c-c230a30ad637', u'_context_strategy': u'keystone', u'_context_is_admin': False, u'_context_project_id': u'2', u'_context_timestamp': u'2012-01-11T19:41:58.186031', u'_context_user_id': u'demo', u'method': u'attach_volume', u'_context_remote_address': u'172.18.0.146'} from (pid=8603) __call__ /opt/stack/nova/nova/rpc/impl_kombu.py:629
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,268 INFO nova.api.openstack.wsgi [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] http://172.18.0.146:8774/v1.1/2/servers/3d37d21e-8fba-46a4-a0e1-d137f1d17597/os-volume_attachments returned with HTTP 200
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,268 DEBUG nova.rpc [-] unpacked context: {'user_id': u'demo', 'roles': [u'Member', u'sysadmin', u'netadmin'], 'timestamp': u'2012-01-11T19:41:58.186031', 'auth_token': u'07302338-e85e-49cf-a50c-c230a30ad637', 'msg_id': None, 'remote_address': u'172.18.0.146', 'strategy': u'keystone', 'is_admin': False, 'request_id': u'229ab0dc-3085-4f4c-bb80-d8104c96bef8', 'project_id': u'2', 'read_deleted': u'no'} from (pid=8603) _unpack_context /opt/stack/nova/nova/rpc/impl_kombu.py:675
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,269 INFO nova.compute.manager [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] check_instance_lock: decorating: |<function attach_volume at 0x3393ed8>|
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,269 INFO nova.compute.manager [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] check_instance_lock: arguments: |<nova.compute.manager.ComputeManager object at 0x31a96d0>| |<nova.rpc.impl_kombu.RpcContext object at 0x4a51fd0>| |3d37d21e-8fba-46a4-a0e1-d137f1d17597|
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,270 DEBUG nova.compute.manager [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] instance 3d37d21e-8fba-46a4-a0e1-d137f1d17597: getting locked state from (pid=8603) get_lock /opt/stack/nova/nova/compute/manager.py:1426
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,320 INFO nova.compute.manager [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] check_instance_lock: locked: |False|
Jan 11 14:41:58 xg06eth0 2012-01-11 14:41:58,320 INFO nova.compute.manager [229ab0dc-3085-4f4c-bb80-d8104c96bef8 demo 2] check_instance_lock: admin: |False|
Jan ...
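
To see what libvirt actually thinks is attached (and whether the failed vdb attach left stale state that could also block vdc), the domain's target devices can be dumped on the compute host. A diagnostic sketch, assuming libvirt-python is available and using a hypothetical libvirt domain name for the instance:

import libvirt
from xml.etree import ElementTree

conn = libvirt.open('qemu:///system')         # local hypervisor
dom = conn.lookupByName('instance-00000001')  # hypothetical domain name
tree = ElementTree.fromstring(dom.XMLDesc(0)) # current domain XML

# Each <disk><target dev="..."/> is a device name that a new
# virDomainAttachDevice() call would collide with.
targets = [t.get('dev') for t in tree.findall('./devices/disk/target')]
print(targets)  # e.g. ['vda', 'vdb']; 'vdb' here would mean a stale attach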