The issue here is that a previous test, LiveAutoBlockMigrationV225Test:test_live_migration_with_trunk, is blocking libvirtd's single event loop thread (changed in 6.2.0 to per-domain event threads, FWIW) during the deletion of its instance after the test had already passed:
10135 Oct 01 11:14:33.313929 ubuntu-focal-iweb-mtl01-0026751352 nova-compute[67743]: INFO nova.compute.manager [None req-9a2b5677-0383-480a-bc26-9a70831bd975 tempest-LiveMigrationTest-22574426 tempest-LiveMigrationTest-22574426-project] [instance: 45adbb55-491d-418b-ba68-7db43d1c235b] Took 240.14 seconds to destroy the instance on the hypervisor.
This leads to the connection between the source and destination hosts timing out during our failing test due to missed keepalive heartbeats.
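To make the failure mode concrete, here is a minimal, hypothetical sketch (plain asyncio, not nova or libvirt code; `heartbeat` and `slow_teardown` are illustrative names) of why a single blocking call on a single-threaded event loop delays every heartbeat scheduled on that loop:

```python
import asyncio
import time

async def heartbeat(log):
    # Stand-in for the keepalive task: record a timestamp every 0.1 s.
    for _ in range(5):
        log.append(time.monotonic())
        await asyncio.sleep(0.1)

def slow_teardown():
    # Stand-in for the long instance deletion: a synchronous call that
    # never yields, so nothing else on the loop can run meanwhile.
    time.sleep(0.5)

async def main():
    log = []
    hb = asyncio.create_task(heartbeat(log))
    await asyncio.sleep(0)   # let the heartbeat task start
    slow_teardown()          # blocks the whole event loop for 0.5 s
    await hb
    # The largest gap between consecutive heartbeats is the stall length.
    gaps = [b - a for a, b in zip(log, log[1:])]
    return max(gaps)

stall = asyncio.run(main())
print(f"longest gap between heartbeats: {stall:.2f}s")
```

Scaled up, a 240-second stall of this kind blows well past any reasonable keepalive window, which is exactly the missed-heartbeat disconnect seen between source and dest.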
I'm going to suggest that we don't run the trunk tests for the time being, move the job back to voting and resolve the above deletion issue with the os-vif and Neutron folks in a fresh bug.
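For reference, the window that the source/dest connection exceeded is governed by libvirtd's keepalive settings; a sketch of the relevant knobs in /etc/libvirt/libvirtd.conf (values shown are the documented defaults, included here only for illustration):

```ini
# /etc/libvirt/libvirtd.conf -- keepalive tuning (illustrative defaults).
# libvirtd sends a keepalive probe every keepalive_interval seconds and
# drops the connection after keepalive_count consecutive missed responses,
# i.e. an effective timeout of roughly 5 * 5 = 25 seconds here.
keepalive_interval = 5
keepalive_count = 5
```

Raising these would only mask a 240-second teardown stall, so tracking the slow deletion itself in a fresh bug with the os-vif and Neutron folks is the right fix.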