Comment 13 for bug 1807723

Matt Riedemann (mriedem) wrote: Re: swap multiattach volume intermittently fails when servers are on different hosts

Oh, I guess tempest will detach the volume from each server before deleting each server, because of its cleanup steps.

So since we do this to set up (see the sketch after the list):

1. create volume1
2. create volume2
3. create server1 and server2 (multi-create request)
4. attach volume1 to server1
5. attach volume1 to server2
6. swap volume1 to volume2 on server1
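
In tempest terms, that setup looks roughly like the following. This is just a sketch; the helper names and arguments (create_volume, attach_volume, update_attached_volume, and the create_multiple_test_servers helper in particular) are my approximation of the tempest API, not copied from the actual TestMultiAttachVolumeSwap code.

    # Sketch only: helper signatures are approximations of the tempest API,
    # not taken verbatim from TestMultiAttachVolumeSwap.
    def _setup_swap_scenario(test):
        # 1./2. create two volumes that support multiattach
        volume1 = test.create_volume(multiattach=True)
        volume2 = test.create_volume(multiattach=True)

        # 3. create server1 and server2 (the real test uses a single
        #    multi-create request; the helper name here is hypothetical)
        server1, server2 = test.create_multiple_test_servers(count=2)

        # 4./5. attach the multiattach volume1 to both servers
        test.attach_volume(server1, volume1)
        test.attach_volume(server2, volume1)

        # 6. swap volume1 to volume2 on server1 via the admin swap-volume API
        test.admin_servers_client.update_attached_volume(
            server1['id'], volume1['id'], volumeId=volume2['id'])
        return server1, server2, volume1, volume2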

When the test tears down, it will cleanup in this order:

1. detach volume1 from server2
2. detach volume1 from server1 (this should 404 since volume1 is not attached to server1 after the swap)
3. delete server1 and server2
4. delete volume2
5. delete volume1
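
That reversed order is just the usual LIFO behavior of addCleanup: tempest registers a cleanup for each resource as it creates it, so the last thing created is the first thing torn down. A standalone illustration (not the actual test code):

    # Standalone illustration of LIFO cleanup ordering; not tempest code.
    import unittest

    class CleanupOrderExample(unittest.TestCase):
        def test_order(self):
            # Cleanups run after the test body, in reverse registration order.
            self.addCleanup(print, '5. delete volume1')               # registered first, runs last
            self.addCleanup(print, '4. delete volume2')
            self.addCleanup(print, '3. delete server1 and server2')
            self.addCleanup(print, '2. detach volume1 from server1')
            self.addCleanup(print, '1. detach volume1 from server2')  # registered last, runs first

    if __name__ == '__main__':
        unittest.main()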

One thing that appears to be missing is that tempest does not explicitly detach volume2 from server1 on teardown, so that detach has to happen implicitly in nova-compute when server1 is deleted, which is confirmed by:

http://logs.openstack.org/81/606981/4/check/tempest-slow/fafde23/compute1/logs/screen-n-cpu.txt.gz#_Dec_08_01_45_42_745301

in comment 8:

Dec 08 01:45:43.084933 ubuntu-xenial-ovh-gra1-0001066278 nova-compute[20206]: DEBUG nova.virt.libvirt.volume.iscsi [None req-69aba1e7-c922-40f2-8136-6e6e0a8c924e tempest-TestMultiAttachVolumeSwap-1803326020 tempest-TestMultiAttachVolumeSwap-1803326020] [instance: d46fba31-9469-4799-b2bf-1fbad4369a9a] calling os-brick to detach iSCSI Volume {{(pid=20206) disconnect_volume /opt/stack/nova/nova/virt/libvirt/volume/iscsi.py:72}}

So maybe what is happening is: tempest doesn't know volume2 is attached to server1, so it deletes server1 (which implicitly detaches volume2), but since tempest isn't waiting for volume2 to actually be detached before trying to delete it, it goes straight to deleting volume2, and that fails because the backend target is not yet disconnected.
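
If that's right, one way to close the race would be to wait for volume2 to go back to 'available' before deleting it. A sketch only, assuming tempest's waiters module; where exactly this would live (the test itself vs. the generic cleanup path) is not something I've worked out:

    # Sketch of a possible fix; the call site is an assumption, not an actual patch.
    from tempest.common import waiters

    # ... after server1 is deleted, before volume2 is deleted ...
    waiters.wait_for_volume_resource_status(
        volumes_client, volume2['id'], 'available')
    volumes_client.delete_volume(volume2['id'])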