This patch doesn't address the issue. Nova's driver.attach_volume happens before the volume_api.attach is even called. Cinder's attach is just a notification.
To fix this, we may have to make reserve_volume actually reserve the volume. Or maybe I need to handle this in my driver?
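For illustration only, here is a minimal sketch (not actual Nova/Cinder code; the class and the in-process lock are stand-ins for the real volume record and a DB-level compare-and-swap) of what a useful reserve_volume could look like: atomically move the volume from 'available' to 'attaching', so the second concurrent attach fails instead of silently proceeding.

```python
# Hedged sketch: a reservation that closes the race between two
# concurrent volume-attach calls. In real Cinder this transition
# would have to be an atomic conditional UPDATE in the database,
# not a thread lock.
import threading


class Volume:
    def __init__(self):
        self.status = "available"
        self._lock = threading.Lock()  # stand-in for a DB compare-and-swap


def reserve_volume(volume):
    """Return True if this caller won the reservation, False otherwise."""
    with volume._lock:
        if volume.status != "available":
            return False  # already reserved/attached by someone else
        volume.status = "attaching"
        return True


vol = Volume()
first = reserve_volume(vol)   # instance 1 wins the reservation
second = reserve_volume(vol)  # instance 2 must lose
print(first, second)          # -> True False
```

With a reservation like this in place, the second `nova volume-attach` below would get an error back instead of attaching the same LUN twice.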
It seems really easy to reproduce:

nova volume-attach <instanceid1> <volumeid> auto &
nova volume-attach <instanceid2> <volumeid> auto &
root@nova:~# virsh list
 Id    Name                 State
----------------------------------
 1     instance-00000001    running
 2     instance-00000002    running
root@nova:~# virsh domblklist 1
Target     Source
------------------------------------------------
vda        /etc/nova/state/instances/instance-00000001/disk
vdb        /dev/disk/by-path/ip-10.127.0.166:3260-iscsi-iqn.2010-11.com.rackspace:9cba7916-f5e4-4edb-8bb6-7582d6609e9c-lun-0

root@nova:~# virsh domblklist 2
Target     Source
------------------------------------------------
vda        /etc/nova/state/instances/instance-00000002/disk
vdb        /dev/disk/by-path/ip-10.127.0.166:3260-iscsi-iqn.2010-11.com.rackspace:9cba7916-f5e4-4edb-8bb6-7582d6609e9c-lun-0