Comment 1 for bug 1876087

wes hayutin (weshayutin) wrote : Re: Queens, tempest.scenario.test_network_basic_ops.TestNetworkBasicOps failing. Timeout

This is really hard to pin down, but I believe it's related to the network, more specifically to Open vSwitch.
I checked the neutron logs: the requests were executed properly, with no failures, but in the openvswitch log I see the following errors:

2020-04-30 10:06:37.281 80095 DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-0274982a-50f5-4940-88e9-9eca4ee53099 - - - - -] Configuring tunnel endpoints to other OVS agents tunnel_sync /usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1930
2020-04-30 10:06:37.283 80095 ERROR neutron.agent.linux.async_process [-] Error received from [ovsdb-client monitor tcp:127.0.0.1:6640 Bridge name --format=json]: net_mlx5: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory
2020-04-30 10:06:37.283 80095 ERROR neutron.agent.linux.async_process [-] Error received from [ovsdb-client monitor tcp:127.0.0.1:6640 Bridge name --format=json]: net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)
2020-04-30 10:06:37.283 80095 ERROR neutron.agent.linux.async_process [-] Error received from [ovsdb-client monitor tcp:127.0.0.1:6640 Bridge name --format=json]: PMD: net_mlx4: cannot load glue library: libibverbs.so.1: cannot open shared object file: No such file or directory
2020-04-30 10:06:37.284 80095 ERROR neutron.agent.linux.async_process [-] Error received from [ovsdb-client monitor tcp:127.0.0.1:6640 Bridge name --format=json]: PMD: net_mlx4: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx4)
2020-04-30 10:06:37.285 80095 DEBUG neutron.agent.linux.async_process [-] Output received from [ovsdb-client monitor tcp:127.0.0.1:6640 Bridge name --format=json]: {"data":[["40be3a55-a226-4324-904c-7b085dcd6670","initial","br-ex"],["6cc79323-4dd0-4ad0-8b57-0ea869d51a6c","initial","br-int"],["c24511d9-33c5-445f-ad84-206519bf03f1","initial","br-tun"]],"headings":["row","action","name"]} _read_stdout /usr/lib/python2.7/site-packages/neutron/agent/linux/async_process.py:260

Whether or not this is relevant to this bug, only the OVS folks can tell for sure.
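
For reference, a quick way to confirm on the node whether libibverbs is actually missing would be something like the sketch below. It just tries to load the library named in the errors above via Python's ctypes; this is an assumption about how one might verify it, not part of the failing job:

import ctypes

# The ovsdb-client errors above complain that libibverbs.so.1 cannot be
# loaded, so try to load it the same way the mlx4/mlx5 PMDs would.
try:
    ctypes.CDLL("libibverbs.so.1")
    print("libibverbs.so.1 is present and loadable")
except OSError as exc:
    # Matches the "cannot open shared object file" error in the OVS log,
    # i.e. the rdma-core runtime dependency is missing on this node.
    print("libibverbs.so.1 is missing: %s" % exc)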

This error happens before the test is executed; you can see that the first request from the test to set up the network is req-2e78b6d1-f293-47cf-9861-9316b92d5a04. Still, it may be affecting how the iptables rules/routes are handled.
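
To follow that request across the neutron services, a small helper like the one below could be used to grep every log for the request ID. The log location is an assumption (it differs depending on how the services are containerized); the request ID is the one mentioned above:

import glob

REQ_ID = "req-2e78b6d1-f293-47cf-9861-9316b92d5a04"
LOG_GLOB = "/var/log/containers/neutron/*.log"  # assumed path, adjust per deployment

for path in glob.glob(LOG_GLOB):
    with open(path, errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if REQ_ID in line:
                # Print file, line number and the matching log line.
                print("%s:%d: %s" % (path, lineno, line.rstrip()))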

I also found some interesting logs here:

'neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api.DhcpAgentNotifyAPI._native_event_send_dhcp_notification--9223372036854012965', 'neutron.services.segments.plugin.NovaSegmentNotifier._notify_port_created_or_deleted--9223372036854248080', 'neutron.plugins.ml2.plugin.Ml2Plugin.notify_sg_on_port_change-66733164'] for port, after_create _notify_loop /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:167
2020-04-30 10:10:31.445 30 DEBUG neutron_lib.callbacks.manager [req-a4c2abfc-c635-4eb6-a547-eb88bba5c952 - - - - -] Notify callbacks ['neutron.services.qos.qos_plugin.QoSPlugin._validate_create_port_callback--9223372036854676145'] for port, precommit_create _notify_loop /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:167
2020-04-30 10:10:31.452 29 DEBUG oslo_concurrency.lockutils [-] Lock "event-dispatch" acquired by "neutron.plugins.ml2.ovo_rpc.dispatch_events" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2020-04-30 10:10:31.578 29 DEBUG neutron.api.rpc.handlers.resources_rpc [req-e4c00aeb-ac40-4d04-b8c9-e0259c0a09aa bfd77da034c54a0cbedd7e5801da3a32 4147846aa9db451680d9b4a035ac2204 - - -] Pushing event updated for resources: {'Port': ['ID=955275ef-1b3a-4ac4-a1e3-12e5c0e6b270,revision_number=5']} push /usr/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py:241
2020-04-30 10:10:31.582 29 DEBUG oslo_concurrency.lockutils [req-e4c00aeb-ac40-4d04-b8c9-e0259c0a09aa bfd77da034c54a0cbedd7e5801da3a32 4147846aa9db451680d9b4a035ac2204 - - -] Lock "event-dispatch" released by "neutron.plugins.ml2.ovo_rpc.dispatch_events" :: held 0.129s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2020-04-30 10:10:31.614 29 DEBUG neutron.scheduler.dhcp_agent_scheduler [req-e4c00aeb-ac40-4d04-b8c9-e0259c0a09aa bfd77da034c54a0cbedd7e5801da3a32 4147846aa9db451680d9b4a035ac2204 - default default] Network f0ee86ea-b7a5-4678-981c-762af5e39a9b is already hosted by enough agents. _get_dhcp_agents_hosting_network /usr/lib/python2.7/site-packages/neutron/scheduler/dhcp_agent_scheduler.py:232
2020-04-30 10:10:31.666 29 DEBUG neutron_lib.callbacks.manager [req-e4c00aeb-ac40-4d04-b8c9-e0259c0a09aa bfd77da034c54a0cbedd7e5801da3a32 4147846aa9db451680d9b4a035ac2204 - default default] Notify callbacks ['neutron.services.l3_router.l3_router_plugin.L3RouterPlugin._add_csnat_on_interface_create-653200'] for router_interface, before_create _notify_loop /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:167
2020-04-30 10:10:31.715 29 DEBUG neutron_lib.callbacks.manager [req-e4c00aeb-ac40-4d04-b8c9-e0259c0a09aa bfd77da034c54a0cbedd7e5801da3a32 4147846aa9db451680d9b4a035ac2204 - default default] Notify callbacks ['neutron.plugins.ml2.plugin.Ml2Plugin._ensure_default_security_group_handler--9223372036788042693'] for port, before_update _notify_loop /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:167
2020-04-30 10:10:31.726 30 DEBUG neutron.db.provisioning_blocks [req-a4c2abfc-c635-4eb6-a547-eb88bba5c952 - - - - -] Transition to ACTIVE for port object c02079d2-391c-4b16-aa70-c1dd0118969c will not be triggered until provisioned by entity L2. add_provisioning_component /usr/lib/python2.7/site-packages/neutron/db/provisioning_blocks.py:73
2020-04-30 10:10:31.742 30 DEBUG neutron.db.provisioning_blocks [req-a4c2abfc-c635-4eb6-a547-eb88bba5c952 - - - - -] Transition to ACTIVE for port object c02079d2-391c-4b16-aa70-c1dd0118969c will not be triggered until provisioned by entity DHCP. add_provisioning_component /usr/lib/python2.7/site-packages/neutron/db/provisioning_blocks.py:73
2020-04-30 10:10:31.750 30 DEBUG neutron_lib.callbacks.manager [req-a4c2abfc-c635-4eb6-a547-eb88bba5c952 - - - - -] Notify callbacks ['neutron.plugins.ml2.ovo_rpc._ObjectChangeHandler.handle_event-681199', 'neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api.DhcpAgentNotifyAPI._native_event_send_dhcp_notification--9223372036854012965', 'neutron.services.segments.plugin.NovaSegmentNotifier._notify_port_created_or_deleted--9223372036854248080', 'neutron.plugins.ml2.plugin.Ml2Plugin.notify_sg_on_port_change-66733164'] for port, after_create _notify_loop /usr/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py:167
2020-04-30 10:10:31.758 30 DEBUG oslo_concurrency.lockutils [-] Lock "event-dispatch" acquired by "neutron.plugins.ml2.ovo_rpc.dispatch_events" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2020-04-30 10:10:31.905 30 DEBUG neutron.api.rpc.handlers.resources_rpc [req-a4c2abfc-c635-4eb6-a547-eb88bba5c952 - - - - -] Pushing event updated for resources: {'Port': ['ID=c02079d2-391c-4b16-aa70-c1dd0118969c,revision_number=5']} push /usr/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py:241
2020-04-30 10:10:31.912 30 DEBUG oslo_concurrency.lockutils [req-a4c2abfc-c635-4eb6-a547-eb88bba5c952 - - - - -] Lock "event-dispatch" released by "neutron.plugins.ml2.ovo_rpc.dispatch_events" :: held 0.155s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2020-04-30 10:10:31.936 30 DEBUG neutron.scheduler.dhcp_agent_scheduler [req-a4c2abfc-c635-4eb6-a547-eb88bba5c952 - - - - -] Network f0ee86ea-b7a5-4678-981c-762af5e39a9b is already hosted by enough agents. _get_dhcp_agents_hosting_network /usr/lib/python2.7/site-packages/neutron/scheduler/dhcp_agent_scheduler.py:232
2020-04-30 10:10:31.949 30 DEBUG neutron.plugins.ml2.managers [req-a4c2abfc-c635-4eb6-a547-eb88bba5c952 - - - - -] Attempting to bind port c02079d2-391c-4b16-aa70-c1dd0118969c on host node-0000074445.localdomain for vnic_type normal with profile bind_port /usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:745
2020-04-30 10:10:31.949 30 DEBUG neutron.plugins.ml2.managers [req-a4c2abfc-c635-4eb6-a547-eb88bba5c952 - - - - -] Attempting to bind port c02079d2-391c-4b16-aa70-c1dd0118969c on host node-0000074445.localdomain at level 0 using segments [{'network_id': 'f0ee86ea-b7a5-4678-981c-762af5e39a9b', 'segmentation_id': 82, 'physical_network': None, 'id': '86d28bc0-799e-44e3-b2bf-74eb42c61440', 'network_type': u'vxlan'}] _bind_port_level /usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:766

And

2020-04-30 10:10:32.641 29 DEBUG neutron.scheduler.l3_agent_scheduler [req-e4c00aeb-ac40-4d04-b8c9-e0259c0a09aa bfd77da034c54a0cbedd7e5801da3a32 4147846aa9db451680d9b4a035ac2204 - default default] Router e1c775e2-8fab-423d-bd61-21599d931f61 has already been hosted by L3 agent 73e8098d-eec3-4481-8dba-01eb57b54049 _get_candidates /usr/lib/python2.7/site-packages/neutron/scheduler/l3_agent_scheduler.py:131