Fullstack test test_qos.TestBwLimitQoSOvs.test_bw_limit_qos_port_removed failing many times
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
neutron | Fix Released | High | Slawek Kaplonski |
Bug Description
It looks like this test fails every time for the same reason:
2017-12-13 05:29:35.846 | Captured traceback:
2017-12-13 05:29:35.848 | ~~~~~~~~~~~~~~~~~~~
2017-12-13 05:29:35.850 | Traceback (most recent call last):
2017-12-13 05:29:35.852 | File "neutron/
2017-12-13 05:29:35.854 | return f(self, *args, **kwargs)
2017-12-13 05:29:35.856 | File "neutron/
2017-12-13 05:29:35.858 | self._wait_
2017-12-13 05:29:35.860 | File "neutron/
2017-12-13 05:29:35.862 | self._wait_
2017-12-13 05:29:35.864 | File "neutron/
2017-12-13 05:29:35.866 | lambda: vm.bridge.
2017-12-13 05:29:35.868 | File "neutron/
2017-12-13 05:35:14.907 | raise WaitTimeout("Timed out after %d seconds" % timeout)
2017-12-13 05:35:14.909 | neutron.
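The WaitTimeout at the end of the traceback comes from a polling wait helper that the fullstack test uses to wait for the bandwidth limit to show up on (or disappear from) the VM's bridge. The sketch below is only an approximation of how such a helper works; the names, defaults and layout are my assumptions, not neutron's exact code:

import time


class WaitTimeout(Exception):
    """Raised when the predicate never becomes true within the timeout."""


def wait_until_true(predicate, timeout=60, sleep=1):
    # Poll the predicate until it returns True; give up after `timeout`
    # seconds, which produces the "Timed out after %d seconds" message
    # seen in the traceback above.
    deadline = time.time() + timeout
    while not predicate():
        if time.time() > deadline:
            raise WaitTimeout("Timed out after %d seconds" % timeout)
        time.sleep(sleep)

In the failing test the predicate is the lambda on vm.bridge shown in the traceback; if the OVS agent has not applied (or removed) the limit before the timeout expires, the helper raises WaitTimeout and the test fails.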
It failed 16 times in the 24 hours between 9:00 on 12.12.2017 and 9:00 on 13.12.2017: http://
Changed in neutron:
status: New → Confirmed
importance: Undecided → High
Changed in neutron:
assignee: nobody → Slawek Kaplonski (slaweq)
tags: added: neutron-proactive-backport-potential
tags: removed: neutron-proactive-backport-potential
I have been trying to understand the reason for this issue from the gate logs and have also been trying to reproduce it locally.
It looks to me like a race condition that shows up when the load on the test host is high, because there is very little time between adding the port and removing it.
I was able to reproduce the issue manually once. I will keep digging into it to confirm (or not) whether that is really the cause.
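If that race is confirmed, one possible test-side mitigation would be to wait until the bandwidth limit is actually applied on the port before removing the port, rather than deleting it right after creation. The sketch below reuses the wait_until_true helper from above; port_has_bw_limit, get_bw_limit_for_port and delete_port are hypothetical placeholders, not the real fullstack API:

def port_has_bw_limit(bridge, port_name, expected_limit):
    # True once the OVS bridge reports the expected egress limit for the
    # port (hypothetical helper, for illustration only).
    return bridge.get_bw_limit_for_port(port_name) == expected_limit


def remove_port_after_limit_applied(bridge, port_name, expected_limit,
                                    delete_port):
    # Give the agent time to apply the QoS rule before the port is
    # removed; deleting the port immediately after creating it is what
    # seems to race with the agent when the host is under heavy load.
    wait_until_true(
        lambda: port_has_bw_limit(bridge, port_name, expected_limit),
        timeout=60)
    delete_port(port_name)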