I also had some occurrences of this issue and tried to figure out the problem; I just want to share my observations. I refer to [1].
Failing test: test_dualnet_multi_prefix_dhcpv6_stateless
What was happening: The test requires 2 instances. The first instance was set up successfully (prepare_server). The error happened while the second one was being processed: it had been created, but creating the floating IP failed because querying the port returned an empty list [2] [3]. This is the failing assertion [4]. Right after that the cleanup starts.
Having a look at the flow for "tap3b689f82-b8", the tap device of the failing instance "a7141979-cae6-40d3-9ca6-2e8ac6b45b63" [5]:
2016-01-29 21:58:47.823 [q-agt] Tap device has been added and detected by the agent loop [7]
2016-01-29 21:58:49.089 [q-svc] Device details requested from agent [12]
2016-01-29 21:58:52.241 [q-svc] Neutron server got informed that the device is up [9]
2016-01-29 21:58:52.817 [q-agt] Full agent resync triggered [13]
2016-01-29 21:58:53.976 [q-svc] Device details requested again (due to agent resync) [11]
2016-01-29 21:58:56,049 [console] Tempest test fails as the query on the port did not return anything useful
2016-01-29 21:58:56,287 [console] Delete has been triggered [8]
2016-01-29 21:58:56.942 [q-agt] Tap disappeared (the agent was in the midst of processing it) [6]
2016-01-29 21:59:10.396 [q-svc] Neutron server got informed that the device is down [10]
--> I found nothing that caught my attention... the agent resync was triggered due to bug 1532171, as a parallel running test case deleted an instance...
The query that returns nothing is:
http://127.0.0.1:9696/v2.0/ports?device_id=a7141979-cae6-40d3-9ca6-2e8ac6b45b63&status=ACTIVE&fixed_ip=None
One of the 3 query attributes must have caused the empty result. There should be at least 2 ports returned!
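To make concrete how a single query attribute can empty the result set, here is a toy sketch of exact-match filtering as the API applies it. The port records (and the second port still in BUILD state, mimicking a port the agent has not finished processing) are assumptions for illustration; the real query goes against Neutron's GET /v2.0/ports.

```python
# Toy illustration: exact-match filtering the way the Neutron API applies
# query attributes. Port records below are made up for this sketch.
DEVICE = "a7141979-cae6-40d3-9ca6-2e8ac6b45b63"

ports = [
    {"id": "port-1", "device_id": DEVICE, "status": "ACTIVE"},
    {"id": "port-2", "device_id": DEVICE, "status": "BUILD"},  # still being wired up
]

def filter_ports(ports, **filters):
    """Keep only the ports matching every filter exactly."""
    return [p for p in ports
            if all(p.get(k) == v for k, v in filters.items())]

# Filtering on device_id alone returns both ports:
print(len(filter_ports(ports, device_id=DEVICE)))                   # 2
# Adding status=ACTIVE drops the port the agent has not finished:
print(len(filter_ports(ports, device_id=DEVICE, status="ACTIVE")))  # 1
```

If the agent resync delayed the second port's transition to ACTIVE, the status filter alone would explain the empty (or too-short) result.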
My proposal would be to extend the logging before that assertion is made and to print out a list of all available ports, to see why this query is failing...
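A rough sketch of what that extra logging could look like. The client call is stubbed here so the snippet is self-contained; in tempest's manager.py the real ports client would be used, and the single port record is made up:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger("tempest.scenario.manager")

# Stand-in for the ports client used in manager.py; the response shape
# mirrors Neutron's GET /v2.0/ports. The port record is hypothetical.
def list_all_ports():
    return {"ports": [
        {"id": "port-1",
         "device_id": "a7141979-cae6-40d3-9ca6-2e8ac6b45b63",
         "status": "BUILD"},
    ]}

# Proposed debug output just before the failing assertion: dump every
# port with the attributes the query filters on, so a gate log shows
# whether device_id, status, or fixed_ip eliminated the ports.
summary = [(p["id"], p["device_id"], p["status"])
           for p in list_all_ports()["ports"]]
LOG.debug("All ports at assertion time: %s", summary)
```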
[1] http://logs.openstack.org/18/246318/31/gate/gate-tempest-dsvm-neutron-linuxbridge/15b91f4/console.html.gz
[2] https://github.com/openstack/tempest/blob/master/tempest/scenario/manager.py#L847
[3] http://logs.openstack.org/18/246318/31/gate/gate-tempest-dsvm-neutron-linuxbridge/15b91f4/console.html.gz#_2016-01-29_22_12_04_625
[4] https://github.com/openstack/tempest/blob/master/tempest/scenario/manager.py#L825
[5] http://logs.openstack.org/18/246318/31/gate/gate-tempest-dsvm-neutron-linuxbridge/15b91f4/logs/screen-n-cpu.txt.gz#_2016-01-29_21_58_46_036
[6] http://logs.openstack.org/18/246318/31/gate/gate-tempest-dsvm-neutron-linuxbridge/15b91f4/logs/screen-q-agt.txt.gz#_2016-01-29_21_58_56_924
[7] http://logs.openstack.org/18/246318/31/gate/gate-tempest-dsvm-neutron-linuxbridge/15b91f4/logs/screen-q-agt.txt.gz#_2016-01-29_21_58_47_823
[8] http://logs.openstack.org/18/246318/31/gate/gate-tempest-dsvm-neutron-linuxbridge/15b91f4/console.html.gz#_2016-01-29_22_12_04_625
[9] http://logs.openstack.org/18/246318/31/gate/gate-tempest-dsvm-neutron-linuxbridge/15b91f4/logs/screen-q-svc.txt.gz#_2016-01-29_21_58_52_241
[10] http://logs.openstack.org/18/246318/31/gate/gate-tempest-dsvm-neutron-linuxbridge/15b91f4/logs/screen-q-svc.txt.gz#_2016-01-29_21_59_10_396
[11] http://logs.openstack.org/18/246318/31/gate/gate-tempest-dsvm-neutron-linuxbridge/15b91f4/logs/screen-q-svc.txt.gz#_2016-01-29_21_58_53_976
[12] http://logs.openstack.org/18/246318/31/gate/gate-tempest-dsvm-neutron-linuxbridge/15b91f4/logs/screen-q-svc.txt.gz#_2016-01-29_21_58_49_089
[13] http://logs.openstack.org/18/246318/31/gate/gate-tempest-dsvm-neutron-linuxbridge/15b91f4/logs/screen-q-agt.txt.gz#_2016-01-29_21_58_52_817