neutron floatingip-associate on port can cause server exception

Bug #1370795 reported by Brian Haley
Affects: neutron
Status: Fix Released
Importance: High
Assigned to: Brian Haley
Milestone: 2014.2

Bug Description

Associating a floating IP address with a port that is not itself associated with an instance can cause the neutron server to throw an exception, leaving neutron completely unusable.

Here's how to reproduce it (a Python sketch of the same steps follows the list):

1. Start up devstack, having it clone the latest upstream code, making sure to enable DVR by setting Q_DVR_MODE=dvr_snat
    (this will create a network, subnet, and router, and attach the router to the private and external networks)
2. neutron net-list
3. neutron port-create $private_network_id
4. neutron floatingip-create $public_network_id
5. neutron floatingip-associate $floatingip_id $port_id
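
For reference, here is the same reproduction driven from Python rather than the CLI. The client methods are python-neutronclient v2.0 calls; the endpoint, credentials, and the "private"/"public" network names are assumptions for a stock devstack:

    from neutronclient.v2_0 import client

    # Assumed devstack endpoint and credentials.
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0')

    nets = {n['name']: n['id']
            for n in neutron.list_networks()['networks']}

    # Step 3: create a port on the private network; it is never attached
    # to an instance, so it has no binding host.
    port = neutron.create_port({'port': {'network_id': nets['private']}})

    # Step 4: create a floating IP on the public network.
    fip = neutron.create_floatingip(
        {'floatingip': {'floating_network_id': nets['public']}})

    # Step 5: associating the floating IP with the unbound port is what
    # triggers the traceback below.
    neutron.update_floatingip(
        fip['floatingip']['id'],
        {'floatingip': {'port_id': port['port']['id']}})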

You'll start seeing this in screen-q-svc.log:

2014-09-17 20:56:17.758 5423 DEBUG neutron.db.l3_dvr_db [req-3faea024-ab6c-46f2-8706-e8b1028616ab None] Floating IP host: None _process_floating_ips /opt/stack/neutron/neutron/db/l3_dvr_db.py:296
2014-09-17 20:56:17.760 5423 ERROR oslo.messaging.rpc.dispatcher [req-3faea024-ab6c-46f2-8706-e8b1028616ab ] Exception during message handling: Agent with agent_type=L3 agent and host=None could not be found
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher incoming.message))
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher return self._do_dispatch(endpoint, method, ctxt, args)
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher result = getattr(endpoint, method)(ctxt, **new_args)
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/neutron/neutron/api/rpc/handlers/l3_rpc.py", line 78, in sync_routers
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher context, host, router_ids))
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/neutron/neutron/db/l3_agentschedulers_db.py", line 299, in list_active_sync_routers_on_active_l3_agent
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher active=True)
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/neutron/neutron/db/l3_hamode_db.py", line 458, in get_ha_sync_data_for_host
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher active)
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 330, in get_sync_data
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher self._process_floating_ips(context, routers_dict, floating_ips)
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 299, in _process_floating_ips
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher floating_ip['host'])
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher File "/opt/stack/neutron/neutron/db/agents_db.py", line 157, in _get_agent_by_type_and_host
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher host=host)
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher AgentNotFoundByTypeHost: Agent with agent_type=L3 agent and host=None could not be found
2014-09-17 20:56:17.760 5423 TRACE oslo.messaging.rpc.dispatcher
2014-09-17 20:56:17.768 5423 ERROR oslo.messaging._drivers.common [req-3faea024-ab6c-46f2-8706-e8b1028616ab ] Returning exception Agent with agent_type=L3 agent and host=None could not be found to caller

And it will just keep repeating as the l3-agent retries the call.

The result is that the l3-agent won't be able to do any work.
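
Roughly, that's because a failed sync leaves the agent's full-sync flag set, so every periodic tick re-issues the same doomed sync_routers call. A caricature of that loop (names are illustrative, loosely modeled on the l3-agent's periodic sync, not the actual agent code):

    class SyncLoop(object):
        """Toy model of the l3-agent's periodic router sync."""

        def __init__(self, sync_routers_rpc):
            self.fullsync = True
            self._sync_routers = sync_routers_rpc

        def periodic_sync_routers_task(self):
            if not self.fullsync:
                return
            try:
                # The server raises AgentNotFoundByTypeHost while building
                # the reply (see the traceback above), so this always fails.
                routers = self._sync_routers()
                self.fullsync = False
                return routers
            except Exception:
                # fullsync stays True: the next tick retries, the traceback
                # repeats, and no router work ever gets done.
                pass

    def failing_rpc():
        raise RuntimeError('Agent with agent_type=L3 agent and host=None '
                           'could not be found')

    agent = SyncLoop(failing_rpc)
    for _ in range(3):                 # three ticks, three identical failures
        agent.periodic_sync_routers_task()
    print(agent.fullsync)              # still True; the agent never recovers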

I have a fix I'll send out for review.

Changed in neutron:
assignee: nobody → Brian Haley (brian-haley)
OpenStack Infra (hudson-openstack) wrote: Fix proposed to neutron (master)

Fix proposed to branch: master
Review: https://review.openstack.org/122298

Changed in neutron:
status: New → In Progress
tags: added: l3-dvr-backlog
Armando Migliaccio (armando-migliaccio) wrote:

I don't understand, this exact test case is exercised here:

https://github.com/openstack/tempest/blob/master/tempest/api/network/test_floating_ips.py#L69

How is it possible that we haven't caught it? Digging...

Armando Migliaccio (armando-migliaccio) wrote:

I can imagine this can happen if the test finishes before the L3 agent can do a sync of the floating IPs.

Changed in neutron:
importance: Undecided → High
milestone: none → juno-rc1
Changed in neutron:
assignee: Brian Haley (brian-haley) → Carl Baldwin (carl-baldwin)
Changed in neutron:
assignee: Carl Baldwin (carl-baldwin) → Brian Haley (brian-haley)
Kyle Mestery (mestery) wrote:

Leaving this in; we need this one for DVR in Juno-RC1.

OpenStack Infra (hudson-openstack) wrote: Fix merged to neutron (master)

Reviewed: https://review.openstack.org/122298
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=4478eee9e55193746b43ad5158be5fee29aef8a3
Submitter: Jenkins
Branch: master

commit 4478eee9e55193746b43ad5158be5fee29aef8a3
Author: Brian Haley <email address hidden>
Date: Wed Sep 17 21:48:53 2014 -0400

    Do not lookup l3-agent for floating IP if host=None, dvr issue

    If a floating IP has been associated with a port, but the port
    has not been associated with an instance, attempting to lookup
    the l3-agent hosting it will cause an AgentNotFoundByTypeHost
    exception. Just skip it and go onto the next one.

    Change-Id: If3df9770fa9e2d2eada932ee5f243d3458bf7261
    Closes-Bug: #1370795
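
A minimal sketch of the guard described in that commit (illustrative names, not the exact merged diff): when processing floating IPs for a sync, any floating IP whose port has no binding host is skipped instead of being passed to the agent lookup.

    class AgentNotFoundByTypeHost(Exception):
        pass

    def get_l3_agent_on_host(host):
        # Stand-in for agents_db._get_agent_by_type_and_host(): raises when
        # no L3 agent is registered on the given host, including host=None.
        registered = {'compute-1': 'l3-agent on compute-1'}
        if host not in registered:
            raise AgentNotFoundByTypeHost(
                'Agent with agent_type=L3 agent and host=%s could not be '
                'found' % host)
        return registered[host]

    def process_floating_ips(floating_ips):
        hosted = []
        for fip in floating_ips:
            # The fix: a floating IP whose port is not bound to an instance
            # has host=None, so there is no agent to look up. Skip it and
            # go on to the next one.
            if fip.get('host') is None:
                continue
            hosted.append((fip['id'], get_l3_agent_on_host(fip['host'])))
        return hosted

    # The second entry used to abort the entire sync; now it is skipped.
    print(process_floating_ips([{'id': 'fip-1', 'host': 'compute-1'},
                                {'id': 'fip-2', 'host': None}]))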

Changed in neutron:
status: In Progress → Fix Committed
Thierry Carrez (ttx)
Changed in neutron:
status: Fix Committed → Fix Released
Thierry Carrez (ttx)
Changed in neutron:
milestone: juno-rc1 → 2014.2
OpenStack Infra (hudson-openstack) wrote: Fix proposed to neutron (feature/lbaasv2)

Fix proposed to branch: feature/lbaasv2
Review: https://review.openstack.org/130864

OpenStack Infra (hudson-openstack) wrote: Fix merged to neutron (feature/lbaasv2)

Reviewed: https://review.openstack.org/130864
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=c089154a94e5872efc95eab33d3d0c9de8619fe4
Submitter: Jenkins
Branch: feature/lbaasv2

commit 62588957fbeccfb4f80eaa72bef2b86b6f08dcf8
Author: Kevin Benton <email address hidden>
Date: Wed Oct 22 13:04:03 2014 -0700

    Big Switch: Switch to TLSv1 in server manager

    Switch to TLSv1 for the connections to the backend
    controllers. The default SSLv3 is no longer considered
    secure.

    TLSv1 was chosen over .1 or .2 because the .1 and .2 weren't
    added until python 2.7.9 so TLSv1 is the only compatible option
    for py26.

    Closes-Bug: #1384487
    Change-Id: I68bd72fc4d90a102003d9ce48c47a4a6a3dd6e03
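
As an aside on that first commit: the pin amounts to something like the following (illustrative, not the Big Switch driver's actual code; the hostname is made up). On Python builds older than 2.7.9 the ssl module has no PROTOCOL_TLSv1_1/PROTOCOL_TLSv1_2 constants, so TLSv1 is the newest version that can be requested portably:

    import socket
    import ssl

    sock = socket.create_connection(('backend-controller.example.com', 443))
    # Request TLSv1 explicitly rather than the driver's old SSLv3 default,
    # which the commit message notes is no longer considered secure.
    tls = ssl.wrap_socket(sock, ssl_version=ssl.PROTOCOL_TLSv1)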

commit 17204e8f02fdad046dabdb8b31397289d72c877b
Author: OpenStack Proposal Bot <email address hidden>
Date: Wed Oct 22 06:20:15 2014 +0000

    Imported Translations from Transifex

    For more information about this automatic import see:
    https://wiki.openstack.org/wiki/Translations/Infrastructure

    Change-Id: I58db0476c810aa901463b07c42182eef0adb5114

commit d712663b99520e6d26269b0ca193527603178742
Author: Carl Baldwin <email address hidden>
Date: Mon Oct 20 21:48:42 2014 +0000

    Move disabling of metadata and ipv6_ra to _destroy_router_namespace

    I noticed that disable_ipv6_ra is called from the wrong place and that
    in some cases it was called with a bogus router_id because the code
    made an incorrect assumption about the context. In other case, it was
    never called because _destroy_router_namespace was being called
    directly. This patch moves the disabling of metadata and ipv6_ra in
    to _destroy_router_namespace to ensure they get called correctly and
    avoid duplication.

    Change-Id: Ia76a5ff4200df072b60481f2ee49286b78ece6c4
    Closes-Bug: #1383495

commit f82a5117f6f484a649eadff4b0e6be9a5a4d18bb
Author: OpenStack Proposal Bot <email address hidden>
Date: Tue Oct 21 12:11:19 2014 +0000

    Updated from global requirements

    Change-Id: Idcbd730f5c781d21ea75e7bfb15959c8f517980f

commit be6bd82d43fbcb8d1512d8eb5b7a106332364c31
Author: Angus Lees <email address hidden>
Date: Mon Aug 25 12:14:29 2014 +1000

    Remove duplicate import of constants module

    .. and enable corresponding pylint check now the only offending instance
    is fixed.

    Change-Id: I35a12ace46c872446b8c87d0aacce45e94d71bae

commit 9902400039018d77aa3034147cfb24ca4b2353f6
Author: rajeev <email address hidden>
Date: Mon Oct 13 16:25:36 2014 -0400

    Fix race condition on processing DVR floating IPs

    Fip namespace and agent gateway port can be shared by multiple dvr routers.
    This change uses a set as the control variable for these shared resources
    and ensures that Test and Set operation on the control variable are
    performed atomically so that race conditions do not occur among
    multiple threads processing floating IPs.
    Limitation: The scope of this change is limited to addressing the race
    condition described in the bug report. It may not address other issues
    such as pre-existing issue wit...
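
The "set as the control variable" idea from that last commit, in miniature (illustrative names, not the agent's actual code): routers sharing the FIP namespace register themselves in a set, and the check-then-create / check-then-delete decisions happen under a lock so concurrent threads cannot race:

    import threading

    _fip_router_ids = set()       # routers currently using the FIP namespace
    _fip_lock = threading.Lock()

    def create_fip_namespace():
        print('creating fip namespace')    # stand-in for the real work

    def destroy_fip_namespace():
        print('destroying fip namespace')  # stand-in for the real work

    def router_added(router_id):
        with _fip_lock:
            # Test and set atomically: only the first router in creates.
            if not _fip_router_ids:
                create_fip_namespace()
            _fip_router_ids.add(router_id)

    def router_removed(router_id):
        with _fip_lock:
            _fip_router_ids.discard(router_id)
            # Only the last router out tears the namespace down.
            if not _fip_router_ids:
                destroy_fip_namespace()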
