volumes are allowed to attach to the same instance more than once
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Cinder | Fix Released | Critical | Eric Harney | |
Bug Description
With a volume that doesn't support multi-attach, you can attach it to the same instance more than once.
I'm not sure how easy this is to fix; presumably we need to extend the checks in _attachment_reserve around here:
https:/
to fail if we are attaching to the same instance again.
But doing that alone might interfere with the live-migration case, where we do a fake multi-attach to two separate hosts. So maybe the check needed is that we only allow attaching to the same instance ID if the two attachments are on different compute hosts. (?)
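A minimal sketch of that proposed rule (this is a hypothetical simplification, not the real _attachment_reserve code or Cinder's ORM objects; `Attachment` and `reject_duplicate_attach` are made up here for illustration):

```python
# Hypothetical sketch of the duplicate-attachment guard discussed above.
# "Attachment" is a simplified stand-in for Cinder's attachment records.
from dataclasses import dataclass

@dataclass
class Attachment:
    instance_uuid: str
    host: str  # compute host the attachment targets

def reject_duplicate_attach(existing, instance_uuid, host, multiattach=False):
    """Return True if a new attachment should be rejected.

    A second attachment for the same instance is allowed only when it
    targets a different compute host (the live-migration case where we
    do a fake multi-attach to two separate hosts).
    """
    if multiattach:
        return False  # multi-attach volumes may attach more than once
    for att in existing:
        if att.instance_uuid == instance_uuid and att.host == host:
            return True  # same instance on the same host: duplicate
    return False
```

Under this rule the live-migration flow (same instance, two hosts) still passes, while repeating the attach on one host is refused.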
Below is a reproduction script and some output of when this happens.
$ cat attach_script.sh
#!/usr/bin/bash -x
OS_TOKEN=
OPENSTACKENDPOI
INSTANCEUUID=
VOLUMEUUID=
#NOVA_URL=https:/
NOVA_URL=https:/
INSTANCE_ID=
VOLUME_ID=
function call {
curl -k -s -H "X-Auth-Token: $OS_TOKEN" -H "X-Subject-Token: $OS_TOKEN" "$@" | jq .
}
echo "Using instance_id: $INSTANCE_ID"
echo "Using volume_id: $VOLUME_ID"
echo
echo "Attachments before test:"
call $NOVA_URL/
echo
echo "Attempting 10 concurrent attachments..."
for i in {1..10}
do
call -H 'Content-Type: application/json' $NOVA_URL/
done
sleep 15
echo
echo "Attachments after test:"
call $NOVA_URL/
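The ten concurrent requests can all succeed because the attach path checks for an existing attachment and then records the new one without holding a lock across both steps. A deterministic toy illustration of that check-then-act race (plain Python, not Cinder code; the interleaving is written out by hand rather than using real threads):

```python
# Toy illustration (not Cinder code) of a check-then-act race: both
# requests pass the duplicate check before either one records its
# attachment, so the volume ends up attached to the instance twice.
def check(attachments, instance_uuid):
    # Step 1: the duplicate check sees no existing attachment.
    return instance_uuid not in attachments

def act(attachments, instance_uuid):
    # Step 2: the attachment is recorded.
    attachments.append(instance_uuid)

attachments = []
# Interleaving seen under concurrency: check A, check B, act A, act B.
a_ok = check(attachments, "inst-1")
b_ok = check(attachments, "inst-1")
if a_ok:
    act(attachments, "inst-1")
if b_ok:
    act(attachments, "inst-1")

print(attachments)  # ['inst-1', 'inst-1'] -- a duplicate attachment
```

Serializing the check and the record (or enforcing the constraint in the database) closes this window.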
$ cinder show 6fb23e37-
+------
| Property | Value |
+------
| attached_servers | ['c13aa08a-
| attachment_ids | ['ac58a579-
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2019-06-
| description | None |
| encrypted | False |
| id | 6fb23e37-
| metadata | |
| migration_status | None |
| multiattach | False |
| name | None |
| os-vol-
| os-vol-
| os-vol-
| os-vol-
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | in-use |
| updated_at | 2019-06-
| user_id | ed32ac3ec3d6468
| volume_type | ceph2 |
+------
$ sudo virsh domblklist 1
Target Source
-------
vda vms/c13aa08a-
vdb volumes/
vdc volumes/
vdd volumes/
(vdc and vdd are the same volume)
$ sudo virsh dumpxml 1
...snipped
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
<auth username='cinder'>
<secret type='ceph' uuid='c198a0ce-
</auth>
<source protocol='rbd' name='volumes/
<host name='10.16.151.37' port='6789'/>
</source>
<target dev='vdc' bus='virtio'/>
<alias name='virtio-
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
<auth username='cinder'>
<secret type='ceph' uuid='c198a0ce-
</auth>
<source protocol='rbd' name='volumes/
<host name='10.16.151.37' port='6789'/>
</source>
<target dev='vdd' bus='virtio'/>
<alias name='virtio-
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
Interesting note: if you try this with volume encryption, Nova will fail subsequent attaches with:
libvirtError: internal error: a secret with UUID
64311d25-
Changed in cinder:
assignee: nobody → Eric Harney (eharney)
I tested on Rocky (Ubuntu 18.04), but OpenStack returns "ERROR (BadRequest): Invalid volume: volume 7609d0be-638a-440c-8e40-5e1237648365 already attached". I cannot reproduce your problem.