gate-tempest-devstack-vm-quantum fails with KeyError in compute log

Bug #1185834 reported by Hans Lindgren
This bug affects 8 people
Affects                    Status      Importance  Assigned to  Milestone
OpenStack Compute (nova)   Confirmed   High        Unassigned
neutron                    Confirmed   High        Unassigned

Bug Description

I have seen this happen a couple of times, recently here http://logs.openstack.org/31001/2/check/gate-tempest-devstack-vm-quantum/26260/ and here http://logs.openstack.org/31015/2/check/gate-tempest-devstack-vm-quantum/26286/

Tempest reports several of these exceptions for various tests:
BuildErrorException: Server %(server_id)s failed to build and is in ERROR status

Inspecting the compute log shows multiple KeyError exceptions for 'floatingips' or 'ports', raised while nova sets up networking for the instance. The errors originate in python-quantumclient.

Example traceback:

2013-05-30 12:06:17.608 ERROR nova.compute.manager [req-047d428b-e790-4174-bb42-f760371b5c50 demo demo] [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] Instance failed network setup
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] Traceback (most recent call last):
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] File "/opt/stack/new/nova/nova/compute/manager.py", line 1118, in _allocate_network
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] security_groups=security_groups)
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] File "/opt/stack/new/nova/nova/network/api.py", line 48, in wrapper
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] res = f(self, context, *args, **kwargs)
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] File "/opt/stack/new/nova/nova/network/quantumv2/api.py", line 285, in allocate_for_instance
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] nw_info = self._get_instance_nw_info(context, instance, networks=nets)
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] File "/opt/stack/new/nova/nova/network/quantumv2/api.py", line 377, in _get_instance_nw_info
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] nw_info = self._build_network_info_model(context, instance, networks)
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] File "/opt/stack/new/nova/nova/network/quantumv2/api.py", line 854, in _build_network_info_model
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] network_IPs = self._nw_info_get_ips(client, port)
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] File "/opt/stack/new/nova/nova/network/quantumv2/api.py", line 783, in _nw_info_get_ips
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] client, fixed_ip['ip_address'], port['id'])
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] File "/opt/stack/new/nova/nova/network/quantumv2/api.py", line 721, in _get_floating_ips_by_fixed_and_port
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] port_id=port)
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] File "/opt/stack/new/python-quantumclient/quantumclient/v2_0/client.py", line 108, in with_params
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] ret = self.function(instance, *args, **kwargs)
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] File "/opt/stack/new/python-quantumclient/quantumclient/v2_0/client.py", line 441, in list_floatingips
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] **_params)
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] File "/opt/stack/new/python-quantumclient/quantumclient/v2_0/client.py", line 1022, in list
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] res.extend(r[collection])
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] KeyError: 'floatingips'
2013-05-30 12:06:17.608 23143 TRACE nova.compute.manager [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7]
2013-05-30 12:06:17.609 DEBUG nova.compute.claims [req-047d428b-e790-4174-bb42-f760371b5c50 demo demo] [instance: d9c89c2d-002d-40c0-bc50-aefeb49504f7] Aborting claim: [Claim: 64 MB memory, 0 GB disk, 1 VCPUS] abort /opt/stack/new/nova/nova/compute/claims.py:91
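
For context, the frame that fails is quantumclient's generic list() helper, which indexes each decoded JSON response body by the collection name it expects ('floatingips' here, 'ports' in other traces). A simplified sketch of that pattern, with hypothetical helper names rather than the actual client code, shows how a body keyed by "networks" instead of "floatingips" surfaces as this bare KeyError:

    # Simplified sketch (hypothetical names, not the real quantumclient code)
    # of the pattern at quantumclient/v2_0/client.py:1022.
    def collect(responses, collection):
        # Gather items for `collection` from a list of decoded JSON bodies.
        res = []
        for r in responses:
            res.extend(r[collection])  # KeyError if the body uses a different key
        return res

    # The failing run effectively does:
    collect([{"networks": []}], "floatingips")   # -> KeyError: 'floatingips'

A more defensive variant would check that the expected collection key is present and raise an error naming both the expected and the actual keys, which would have made this mismatch obvious in the log.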

Tags: api network
Revision history for this message
Aaron Rosen (arosen) wrote :

This is very weird. Here is the quantum API call being made (from http://logs.openstack.org/31015/2/check/gate-tempest-devstack-vm-quantum/26286/logs/screen-n-cpu.txt.gz, just above the exception):

REQ: curl -i http://127.0.0.1:9696//v2.0/floatingips.json?fixed_ip_address=10.1.0.11&port_id=85778a9e-d706-40b9-bfd8-d07d8652122e -X GET -H "X-Auth-Token: <snip>=" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-quantumclient"

But the response is:

2013-05-30 13:16:12.702 21420 DEBUG quantumclient.client [-] RESP:{'date': 'Thu, 30 May 2013 13:16:12 GMT', 'status': '200', 'content-length': '16', 'content-type': 'application/json; charset=UTF-8', 'content-location': 'http://127.0.0.1:9696//v2.0/floatingips.json?fixed_ip_address=10.1.0.11&port_id=85778a9e-d706-40b9-bfd8-d07d8652122e'} {"networks": []}

The response should be {"floatingips": []}, not "networks".

I'm not able to reproduce this on my box by passing in invalid data, though :/
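
One way to sanity-check what the server is actually returning for this query, outside of nova, is to replay the request by hand. A hypothetical sketch using python-requests (endpoint, token and filter values are copied from the log above and will differ per run):

    # Hypothetical reproduction sketch; substitute a valid token and the
    # fixed_ip_address/port_id values from the failing request.
    import requests

    resp = requests.get(
        "http://127.0.0.1:9696/v2.0/floatingips.json",
        params={
            "fixed_ip_address": "10.1.0.11",
            "port_id": "85778a9e-d706-40b9-bfd8-d07d8652122e",
        },
        headers={"X-Auth-Token": "<snip>", "Accept": "application/json"},
    )
    # The expected top-level key is "floatingips"; the failing runs show "networks".
    print(resp.status_code, list(resp.json().keys()))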

no longer affects: python-quantumclient
tags: added: api
Changed in quantum:
importance: Undecided → High
Revision history for this message
Tatyanka (tatyana-leontovich) wrote :

Seems I've run into this too:
2013-05-31 19:45:01.424 21827 DEBUG quantumclient.client [-] RESP:{'date': 'Fri, 31 May 2013 19:45:01 GMT', 'status': '200', 'content-length': '16', 'content-type': 'application/json; charset=UTF-8', 'content-location': 'http://127.0.0.1:9696//v2.0/floatingips.json?fixed_ip_address=10.10.0.4&port_id=1076a6fb-1cfb-4018-a17c-53f15bc9b7e7'} {"networks": []}
 http_log_resp /opt/stack/new/python-quantumclient/quantumclient/common/utils.py:179
2013-05-31 19:45:01.425 DEBUG nova.openstack.common.rpc.amqp [req-8a759901-b4cd-4131-9d8d-b59ad97c4b53 demo1 demo1] Making synchronous call on conductor ... multicall /opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:586
2013-05-31 19:45:01.426 DEBUG nova.openstack.common.rpc.amqp [req-8a759901-b4cd-4131-9d8d-b59ad97c4b53 demo1 demo1] MSG_ID is c3369d733c294f6f9c1a15818b001d56 multicall /opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:589
2013-05-31 19:45:01.427 DEBUG nova.openstack.common.rpc.amqp [req-8a759901-b4cd-4131-9d8d-b59ad97c4b53 demo1 demo1] UNIQUE_ID is c684d43d93014ca6a4609d7c37be12c7. _add_unique_id /opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:337
2013-05-31 19:45:01.451 DEBUG nova.openstack.common.rpc.amqp [req-8a759901-b4cd-4131-9d8d-b59ad97c4b53 demo1 demo1] Making synchronous call on conductor ... multicall /opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:586
2013-05-31 19:45:01.451 DEBUG nova.openstack.common.rpc.amqp [req-8a759901-b4cd-4131-9d8d-b59ad97c4b53 demo1 demo1] MSG_ID is 5c25e5df74154f21b27c80e3836c3dc3 multicall /opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:589
2013-05-31 19:45:01.452 DEBUG nova.openstack.common.rpc.amqp [req-8a759901-b4cd-4131-9d8d-b59ad97c4b53 demo1 demo1] UNIQUE_ID is e2d5836f19474aa19541d96391bce5a4. _add_unique_id /opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:337
2013-05-31 19:45:01.471 DEBUG nova.openstack.common.rpc.amqp [req-8a759901-b4cd-4131-9d8d-b59ad97c4b53 demo1 demo1] Making synchronous call on conductor ... multicall /opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:586
2013-05-31 19:45:01.471 DEBUG nova.openstack.common.rpc.amqp [req-8a759901-b4cd-4131-9d8d-b59ad97c4b53 demo1 demo1] MSG_ID is 93d07cd8611140328bfc53d0a4cca8d6 multicall /opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:589
2013-05-31 19:45:01.472 DEBUG nova.openstack.common.rpc.amqp [req-8a759901-b4cd-4131-9d8d-b59ad97c4b53 demo1 demo1] UNIQUE_ID is 1898c836285a4867bd281722139321aa. _add_unique_id /opt/stack/new/nova/nova/openstack/common/rpc/amqp.py:337
2013-05-31 19:45:01.495 ERROR nova.openstack.common.rpc.amqp [req-8a759901-b4cd-4131-9d8d-b59ad97c4b53 demo1 demo1] Exception during message handling
2013-05-31 19:45:01.495 21827 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2013-05-31 19:45:01.495 21827 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/nova/nova/openstack/common/rpc/amqp.py", line 433, in _process_data
2013-05-31 19:45:01.495 21827 TRACE nova.openstack.common.rpc.amqp **args)
2013-05-31 19:45:01.495 21827 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/nova/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2013-05-31 19:...

Revision history for this message
aeva black (tenbrae) wrote :
Revision history for this message
Rick Harris (rconradharris) wrote :

I'm seeing a KeyError for both 'floatingips' and 'ports'

2013-06-07 03:14:45.212 22093 TRACE nova.openstack.common.rpc.amqp ret = self.function(instance, *args, **kwargs)
2013-06-07 03:14:45.212 22093 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/python-quantumclient/quantumclient/v2_0/client.py", line 262, in list_ports
2013-06-07 03:14:45.212 22093 TRACE nova.openstack.common.rpc.amqp **_params)
2013-06-07 03:14:45.212 22093 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/new/python-quantumclient/quantumclient/v2_0/client.py", line 1022, in list
2013-06-07 03:14:45.212 22093 TRACE nova.openstack.common.rpc.amqp res.extend(r[collection])
2013-06-07 03:14:45.212 22093 TRACE nova.openstack.common.rpc.amqp KeyError: 'ports'

Revision history for this message
Dirk Mueller (dmllr) wrote :

Is this perhaps related to bug 1097203? I've pushed a changeset that requires greenlet 0.3.2 in quantum; perhaps it helps.
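
A quick way to confirm whether the installed greenlet meets that proposed minimum on a gate node is to check it at runtime; a small sketch, assuming pkg_resources is available (as it is in a devstack environment):

    # Check that the installed greenlet satisfies the proposed >= 0.3.2 constraint.
    import pkg_resources

    req = pkg_resources.Requirement.parse("greenlet>=0.3.2")
    dist = pkg_resources.get_distribution("greenlet")
    print(dist.version, dist in req)  # True if the constraint is satisfied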

Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

I have been looking into this as well, and unfortunately I am also unable to reproduce it.

However I found something interesting on a failure that occurred on this gate job:

http://logs.openstack.org/32534/4/gate/gate-tempest-devstack-vm-quantum/30188/

If I look at tempest, the failure is due to a timeout while waiting for the VM to be rescued. Time-wise this is completely unrelated to the recurring KeyError issue we see in the nova-compute log. However, and this is where it gets interesting, I see these traces in the log:

2013-06-13 06:18:48.596 22325 DEBUG quantumclient.client [-] RESP:{'status': '400', 'content-length': 311, 'content-type': 'text/plain'} Second simultaneous read on fileno 24 detected. Unless you really know what you're doing, make sure that only one greenthread can read any particular socket. Consider using a pools.Pool. If you do know what you're doing and want to disable this error, call eventlet.debug.hub_multiple_reader_prevention(False)
 http_log_resp /opt/stack/new/python-quantumclient/quantumclient/common/utils.py:179
2013-06-13 06:18:48.596 22325 DEBUG quantumclient.v2_0.client [-] Error message: Second simultaneous read on fileno 24 detected. Unless you really know what you're doing, make sure that only one greenthread can read any particular socket. Consider using a pools.Pool. If you do know what you're doing and want to disable this error, call eventlet.debug.hub_multiple_reader_prevention(False) _handle_fault_response /opt/stack/new/python-quantumclient/quantumclient/v2_0/client.py:899
2013-06-13 06:18:48.596 22325 DEBUG nova.openstack.common.loopingcall [-] Dynamic looping call sleeping for 60.00 seconds _inner /opt/stack/new/nova/nova/openstack/common/loopingcall.py:132
2013-06-13 06:18:48.630 22325 DEBUG quantumclient.client [-] RESP:{'date': 'Thu, 13 Jun 2013 06:18:48 GMT', 'status': '200', 'content-length': '596', 'content-type': 'application/json; charset=UTF-8', 'content-location': 'http://127.0.0.1:9696//v2.0/ports.json?tenant_id=de8db2b90a724e728e2186db1c3e1235&device_id=ba7c58d0-4237-481b-b066-9511a9b85f8c'} {"ports": [{"status": "ACTIVE", "binding:host_id": null, "name": "", "admin_state_up": true, "network_id": "772daa7b-467a-423d-8dfa-49de7fd97f29", "tenant_id": "de8db2b90a724e728e2186db1c3e1235", "binding:vif_type": "ovs", "device_owner": "compute:None", "binding:capabilities": {"port_filter": true}, "mac_address": "fa:16:3e:eb:65:cc", "fixed_ips": [{"subnet_id": "e72648e8-ce27-4090-89bf-28f3c7b58cdb", "ip_address": "10.1.0.16"}], "id": "22525b73-6012-4d63-8535-868e7f6eada5", "security_groups": ["055711d3-be14-439c-aa15-b743dc20d37c"], "device_id": "ba7c58d0-4237-481b-b066-9511a9b85f8c"}]}
 http_log_resp /opt/stack/new/python-quantumclient/quantumclient/common/utils.py:179
2013-06-13 06:18:48.631 22325 DEBUG quantumclient.client [-]
REQ: curl -i http://127.0.0.1:9696/v2.0/networks.json?tenant_id=de8db2b90a724e728e2186db1c3e1235&shared=False -X GET -H "X-Auth-Token: <snip>" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-quantumclient"
 http_log_req /opt/stack/new/python-quantumclient/quantumclient/common/utils.py:173
2013-06-13 06:18:48.6...
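
The "Second simultaneous read on fileno" message comes from eventlet's guard against two greenthreads waiting on the same socket, which suggests a single quantumclient connection is being shared across greenthreads. A minimal sketch of the condition eventlet protects against (a standalone assumed example, not the gate setup):

    # Two greenthreads reading the same socket trip eventlet's multiple-reader check.
    import eventlet
    eventlet.monkey_patch()
    import socket

    sock = socket.create_connection(("example.org", 80))

    def read_from_shared_socket():
        return sock.recv(1024)  # registers a read listener for sock's fileno

    first = eventlet.spawn(read_from_shared_socket)
    eventlet.sleep(0)  # let the first greenthread block in recv()
    second = eventlet.spawn(read_from_shared_socket)
    second.wait()  # raises RuntimeError: "Second simultaneous read on fileno ..."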

Revision history for this message
Hans Lindgren (hanlind) wrote :

Armando,

I think that is an instance of bug 1189656 that just happened to occur in the same test run.

Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

Then I wonder if the issue we're seeing with the gating is just an effect of bug 1189656.

Revision history for this message
Hua Zhang (zhhuabj) wrote :

Maybe the error is caused by the quota for floating IPs being exhausted; quantum has not yet been coded to catch all of those exceptions. You can try the configuration below:
quota_port = -1

Sean Dague (sdague)
Changed in nova:
importance: Undecided → High
status: New → Confirmed
Changed in quantum:
status: New → Confirmed
Revision history for this message
Sean Dague (sdague) wrote :

Because the patch that Russell spun is attached to my copy of this bug, I'm marking this one as a duplicate in that direction to let that close things out.
