Comment 4 for bug 1811004

Marios Andreou (marios-b) wrote :

Spent some more time poking at the logs - the main error I see, in almost all container services, is a socket error like "Socket error exception [Errno 11] Resource temporarily unavailable read_socket_input /usr/lib/python2.7/site-packages/pyngus/sockets.py:53", e.g. in [1][2][3].
I initially went looking at nova because the failed tempest run [4] reports "Server 5e36afa0-127c-4675-8fd3-01e808327656 failed to reach ACTIVE status and task state "None" within the required time", but the socket error is seen in all containers.
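
For what it's worth, [Errno 11] is EAGAIN - it's what a non-blocking socket read raises when no data happens to be available at that instant. A minimal sketch (my own illustration, not the pyngus code) that reproduces the same errno:

    import errno
    import socket

    # A connected local socket pair with nothing written to it yet
    a, b = socket.socketpair()
    a.setblocking(False)
    try:
        a.recv(4096)  # no data queued, so the non-blocking read can't proceed
    except socket.error as exc:
        # Prints "EAGAIN? True: [Errno 11] Resource temporarily unavailable"
        print("EAGAIN? %s: %s" % (exc.errno == errno.EAGAIN, exc))

If that's all pyngus is logging here, these lines could be transient noise rather than the root cause, so it may be worth checking whether the messaging connections actually recover afterwards.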

The same tempest test is passing for scenario 4 at [5], so this doesn't seem to be a general standalone issue - and furthermore we didn't change anything in the scenario003 standalone job around the time this issue began to appear (~2 days ago).

[1] http://logs.openstack.org/98/604298/165/check/tripleo-ci-centos-7-scenario003-standalone/4dd67b6/logs/undercloud/var/log/containers/nova/nova-scheduler.log.txt.gz
[2] http://logs.openstack.org/98/604298/165/check/tripleo-ci-centos-7-scenario003-standalone/4dd67b6/logs/undercloud/var/log/containers/mistral/executor.log.txt.gz
[3] http://logs.openstack.org/98/604298/165/check/tripleo-ci-centos-7-scenario003-standalone/4dd67b6/logs/undercloud/var/log/containers/neutron/server.log.txt.gz
[4] http://logs.openstack.org/98/604298/165/check/tripleo-ci-centos-7-scenario003-standalone/4dd67b6/logs/undercloud/home/zuul/tempest.log.txt.gz#_2019-01-09_09_39_50
[5] http://logs.openstack.org/98/604298/167/check/tripleo-ci-centos-7-scenario004-standalone/0af5ca9/logs/undercloud/home/zuul/tempest.log.txt.gz#_2019-01-10_01_26_33