This failure has occurred at least twice:
https://17b1c998339a42fd4480-35ecc7b59ac0acd5f8ed77dd8ebc5343.ssl.cf1.rackcdn.com/780926/2/gate/neutron-functional-with-uwsgi/182c299/testr_results.html
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_32f/782553/4/check/neutron-functional-with-uwsgi/32f9987/testr_results.html
Stacktrace:

ft1.19: neutron.tests.functional.agent.l3.test_ha_router.L3HATestCase.test_ipv6_router_advts_and_fwd_after_router_state_change_backup
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", line 704, in wait_until_true
    eventlet.sleep(sleep)
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/greenthread.py", line 36, in sleep
    hub.switch()
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/hubs/hub.py", line 313, in switch
    return self.greenlet.switch()
eventlet.timeout.Timeout: 60 seconds

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 183, in func
    return f(self, *args, **kwargs)
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_ha_router.py", line 148, in test_ipv6_router_advts_and_fwd_after_router_state_change_backup
    self._test_ipv6_router_advts_and_fwd_helper('backup',
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_ha_router.py", line 118, in _test_ipv6_router_advts_and_fwd_helper
    common_utils.wait_until_true(lambda: router.ha_state == 'backup')
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", line 709, in wait_until_true
    raise WaitTimeout(_("Timed out after %d seconds") % timeout)
neutron.common.utils.WaitTimeout: Timed out after 60 seconds
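The timeout above comes from Neutron's wait_until_true helper, which polls a predicate until it returns True or a deadline passes. A rough sketch of that polling pattern (the real helper in neutron/common/utils.py uses eventlet.Timeout and eventlet.sleep; the wall-clock version here is an approximation for illustration):

```python
import time


class WaitTimeout(Exception):
    """Raised when the predicate never becomes true within the timeout."""


def wait_until_true(predicate, timeout=60, sleep=1):
    """Poll predicate until it returns True or timeout seconds elapse.

    Sketch of the pattern used by neutron.common.utils.wait_until_true;
    the real implementation relies on eventlet rather than time.monotonic().
    """
    deadline = time.monotonic() + timeout
    while not predicate():
        if time.monotonic() >= deadline:
            raise WaitTimeout("Timed out after %d seconds" % timeout)
        time.sleep(sleep)


# The failing test waits roughly like this (router is the test's HA router):
# wait_until_true(lambda: router.ha_state == 'backup')
```

In the failing runs the predicate never becomes true, because the router's HA state is never written, so the helper raises WaitTimeout after 60 seconds.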
This is the same problem as in [1]: the keepalived-state-change process does not finish the "handle_initial_state" execution, blocking any further processing. At this point, the process can be considered a zombie.
From the journal logs [2] of the failure in [3], we can see that the process is started but never reaches the code at [4].
I'll mark this LP bug as a duplicate of LP#1917793.
Regards.
[1] https://bugs.launchpad.net/neutron/+bug/1917793
[2] http://paste.openstack.org/show/804048/
[3] https://17b1c998339a42fd4480-35ecc7b59ac0acd5f8ed77dd8ebc5343.ssl.cf1.rackcdn.com/780926/2/gate/neutron-functional-with-uwsgi/182c299/testr_results.html
[4] https://github.com/openstack/neutron/blob/493286511d227c7ee8189838199c673110af1ee8/neutron/agent/l3/keepalived_state_change.py#L102