The test_resize_server_with_multiattached_volume test creates two servers and a single multiattach volume, attaches the volume to both servers, resizes both servers, and then detaches the volume from each.

This is when we attach the multiattach volume to the first instance:

http://logs.openstack.org/17/554317/1/check/nova-multiattach/8e97832/logs/screen-n-cpu.txt.gz#_Mar_19_19_48_00_429470

And the shareable flag is set:

Mar 19 19:48:00.429470 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]: DEBUG nova.virt.libvirt.guest [None req-4a992c4c-d85a-4cd8-9896-569924500d5a tempest-AttachVolumeMultiAttachTest-1944074929 tempest-AttachVolumeMultiAttachTest-1944074929] attach device xml:
Mar 19 19:48:00.429807 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]:
Mar 19 19:48:00.430097 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]:
Mar 19 19:48:00.430383 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]:
Mar 19 19:48:00.430670 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]: 652600d5-f6dc-4089-ba95-d71d7640cafa
Mar 19 19:48:00.430969 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]:
Mar 19 19:48:00.431268 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]:
Mar 19 19:48:00.431544 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]: {{(pid=27735) attach_device /opt/stack/new/nova/nova/virt/libvirt/guest.py:302}}

That's instance 0eed0237-245e-4a18-9e30-9e72accd36c6.

This is when we attach the multiattach volume to the second instance:

http://logs.openstack.org/17/554317/1/check/nova-multiattach/8e97832/logs/screen-n-cpu.txt.gz#_Mar_19_19_48_04_769855

And the shareable flag is set:

Mar 19 19:48:04.769855 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]: DEBUG nova.virt.libvirt.guest [None req-445edbca-bb87-489e-bf14-eb646e570690 tempest-AttachVolumeMultiAttachTest-1944074929 tempest-AttachVolumeMultiAttachTest-1944074929] attach device xml:
Mar 19 19:48:04.770106 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]:
Mar 19 19:48:04.770301 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]:
Mar 19 19:48:04.770483 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]:
Mar 19 19:48:04.770667 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]: 652600d5-f6dc-4089-ba95-d71d7640cafa
Mar 19 19:48:04.770845 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]:
Mar 19 19:48:04.771027 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]:
Mar 19 19:48:04.771217 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]: {{(pid=27735) attach_device /opt/stack/new/nova/nova/virt/libvirt/guest.py:302}}

Then we start to resize the servers. We correctly count the connections to the volume on this host and don't disconnect the volume from the host when detaching it from the first instance:

http://logs.openstack.org/17/554317/1/check/nova-multiattach/8e97832/logs/screen-n-cpu.txt.gz#_Mar_19_19_48_12_057443

Mar 19 19:48:12.057443 ubuntu-xenial-inap-mtl01-0003062768 nova-compute[27735]: INFO nova.virt.libvirt.driver [None req-f044373a-53dd-4de7-b881-ff511ecb3a70 tempest-AttachVolumeMultiAttachTest-1944074929 tempest-AttachVolumeMultiAttachTest-1944074929] [instance: 0eed0237-245e-4a18-9e30-9e72accd36c6] Detected multiple connections on this host for volume: 652600d5-f6dc-4089-ba95-d71d7640cafa, skipping target disconnect.
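The decision behind that "skipping target disconnect" message boils down to checking whether any other instance on the same compute host still has the volume attached before tearing down the host connection. A minimal sketch of that check, using made-up names (should_disconnect_volume, bdms_on_host) rather than the actual driver internals:

def should_disconnect_volume(volume_id, detaching_instance_uuid, bdms_on_host):
    """Return True only if no other instance on this host still uses the volume.

    bdms_on_host: iterable of (instance_uuid, volume_id) pairs for the block
    device mappings of instances currently on this compute host.
    """
    others = [uuid for (uuid, vol) in bdms_on_host
              if vol == volume_id and uuid != detaching_instance_uuid]
    if others:
        # This is the "Detected multiple connections on this host for volume
        # ..., skipping target disconnect" case in the log above.
        return False
    return True

# Both test instances are attached to volume 652600d5-... on the same host, so
# detaching it from the first instance skips the target disconnect (the second
# instance UUID is a placeholder here):
bdms = [('0eed0237-245e-4a18-9e30-9e72accd36c6', '652600d5-f6dc-4089-ba95-d71d7640cafa'),
        ('<second-instance-uuid>', '652600d5-f6dc-4089-ba95-d71d7640cafa')]
assert not should_disconnect_volume('652600d5-f6dc-4089-ba95-d71d7640cafa',
                                    '0eed0237-245e-4a18-9e30-9e72accd36c6', bdms)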
Then when we go to create the guest for that instance during the resize, the shareable flag isn't set and it fails:

http://logs.openstack.org/17/554317/1/check/nova-multiattach/8e97832/logs/screen-n-cpu.txt.gz#_Mar_19_19_48_16_261051

Shortly before that, the block_device_info is logged for the guest XML, and the bdm connection_info doesn't have the multiattach flag set, so that's probably why shareable doesn't get put into the disk config xml:

http://logs.openstack.org/17/554317/1/check/nova-multiattach/8e97832/logs/screen-n-cpu.txt.gz#_Mar_19_19_48_15_706213

But I'm not sure why this would have anything to do with using a newer version of libvirt. I guess I need to trace through the resize request ID more closely, which for the first server is: req-f044373a-53dd-4de7-b881-ff511ecb3a70
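If, as suggested above, the shareable element is driven by the multiattach flag in the bdm connection_info, then a connection_info that has lost that flag during resize would produce a disk XML without <shareable/>. A rough, self-contained sketch of that relationship using plain ElementTree rather than nova's actual config objects (the device paths here are purely illustrative):

from xml.etree import ElementTree as ET

def build_disk_xml(source_dev, target_dev, serial, connection_info):
    # Build a minimal libvirt-style <disk> element; only multiattach-capable
    # attachments get the <shareable/> child.
    disk = ET.Element('disk', type='block', device='disk')
    ET.SubElement(disk, 'source', dev=source_dev)
    ET.SubElement(disk, 'target', dev=target_dev, bus='virtio')
    ET.SubElement(disk, 'serial').text = serial
    if connection_info.get('multiattach', False):
        ET.SubElement(disk, 'shareable')
    return ET.tostring(disk, encoding='unicode')

# The original attach (multiattach=True in connection_info) yields a disk with
# <shareable/>; a connection_info missing the flag, as in the resize path, does not:
print(build_disk_xml('/dev/sdb', 'vdb', '652600d5-f6dc-4089-ba95-d71d7640cafa',
                     {'multiattach': True}))
print(build_disk_xml('/dev/sdb', 'vdb', '652600d5-f6dc-4089-ba95-d71d7640cafa',
                     {}))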