I'm not sure if multiple hosts is really the main culprit because the test passed on this tempest-slow job:
http://logs.openstack.org/78/606978/4/check/tempest-slow/63b3e0e/
And in that case, the two servers involved in the swap from the same multiattach volume are on different hosts. So maybe there is some timing issue involved when this fails.
In that passing case, these are the two servers:
Body: {"servers": [{"id": "988b38e6-1556-45b8-94ba-76d5045ef5e9", "links": [{"href": "https://10.210.68.80/compute/v2.1/servers/988b38e6-1556-45b8-94ba-76d5045ef5e9", "rel": "self"}, {"href": "https://10.210.68.80/compute/servers/988b38e6-1556-45b8-94ba-76d5045ef5e9", "rel": "bookmark"}], "name": "tempest-TestMultiAttachVolumeSwap-server-1095754502-2"}, {"id": "2f4078c8-4911-4154-b24b-ea2ca37fcdd3", "links": [{"href": "https://10.210.68.80/compute/v2.1/servers/2f4078c8-4911-4154-b24b-ea2ca37fcdd3", "rel": "self"}, {"href": "https://10.210.68.80/compute/servers/2f4078c8-4911-4154-b24b-ea2ca37fcdd3", "rel": "bookmark"}], "name": "tempest-TestMultiAttachVolumeSwap-server-1095754502-1"}]} _log_request_full tempest/lib/common/rest_client.py:437
988b38e6-1556-45b8-94ba-76d5045ef5e9 (server2) builds on the primary node and 2f4078c8-4911-4154-b24b-ea2ca37fcdd3 (server1) builds on the subnode.
Maybe it depends on where volume1 builds? In that test, volume1 is 2fe7ad08-b928-4139-9f9d-a4ee54ac630d, which builds on the primary node, and since server1 is on the subnode, that is one difference from the failing scenario, where volume1 and server1 are on the same host.
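As a side note, one way to confirm the co-location hypothesis from the logs is to compare the admin-only host attributes that Nova and Cinder expose ('OS-EXT-SRV-ATTR:host' on a server show, 'os-vol-host-attr:host' on a volume show). A minimal sketch; the dicts below are hand-built from the IDs in this run for illustration, not actual API responses:

```python
def same_host(server, volume):
    """Return True if the server and the volume's backend run on the same hostname."""
    server_host = server.get('OS-EXT-SRV-ATTR:host', '')
    # Cinder reports host as "hostname@backend#pool"; keep only the hostname part.
    volume_host = volume.get('os-vol-host-attr:host', '').split('@')[0]
    return bool(server_host) and server_host == volume_host

# Hypothetical show output matching the passing run described above:
# server1 on the subnode, volume1 on the primary node.
server1 = {'id': '2f4078c8-4911-4154-b24b-ea2ca37fcdd3',
           'OS-EXT-SRV-ATTR:host': 'subnode'}
volume1 = {'id': '2fe7ad08-b928-4139-9f9d-a4ee54ac630d',
           'os-vol-host-attr:host': 'primary@lvm#lvm'}

print(same_host(server1, volume1))  # False: not co-located, as in the passing run
```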