[dvr][fast-exit] a route to a tenant network does not get created in fip namespace if an external network is attached after a tenant network has been attached (race condition)

Bug #1759971 reported by Dmitrii Shcherbakov
Affects                 Status         Importance   Assigned to
Ubuntu Cloud Archive    Fix Released   Medium       Unassigned
  Pike                  Fix Released   Medium       Unassigned
  Queens                Fix Released   Medium       Unassigned
neutron                 Fix Released   Undecided    Dmitrii Shcherbakov
neutron (Ubuntu)        Fix Released   Medium       Unassigned
  Artful                Invalid        Medium       Unassigned
  Bionic                Fix Released   Medium       Unassigned

Bug Description

Overall, a scenario similar to https://bugs.launchpad.net/neutron/+bug/1759956, but a different problem.

Relevant agent config options:
http://paste.openstack.org/show/718418/

OpenStack Queens from UCA (xenial, GA kernel, deployed via OpenStack charms), 2 external subnets (one routed provider network), 1 tenant subnet, all subnets in the same address scope to trigger "fast exit" logic.

Tenant subnet cidr: 192.168.100.0/24

openstack address scope create dev
openstack subnet pool create --address-scope dev --pool-prefix 10.232.40.0/21 --pool-prefix 10.232.16.0/21 dev
openstack subnet pool create --address-scope dev --pool-prefix 192.168.100.0/24 tenant
openstack network create --external --provider-physical-network physnet1 --provider-network-type flat pubnet
openstack network segment set --name segment1 d8391bfb-4466-4a45-972c-45ffcec9f6bc
openstack network segment create --physical-network physnet2 --network-type flat --network pubnet segment2
openstack subnet create --no-dhcp --subnet-pool dev --subnet-range 10.232.16.0/21 --allocation-pool start=10.232.17.0,end=10.232.17.255 --dns-nameserver 10.232.36.101 --ip-version 4 --network pubnet --network-segment segment1 pubsubnetl1
openstack subnet create --gateway 10.232.40.100 --no-dhcp --subnet-pool dev --subnet-range 10.232.40.0/21 --allocation-pool start=10.232.41.0,end=10.232.41.255 --dns-nameserver 10.232.36.101 --ip-version 4 --network pubnet --network-segment segment2 pubsubnetl2
openstack network create --internal --provider-network-type vxlan tenantnet
openstack subnet create --dhcp --ip-version 4 --subnet-range 192.168.100.0/24 --subnet-pool tenant --dns-nameserver 10.232.36.101 --network tenantnet tenantsubnet

# -------
# Works in this order when an external network is attached first

openstack router create --disable --no-ha --distributed pubrouter
openstack router set --disable-snat --external-gateway pubnet --enable pubrouter

openstack router add subnet pubrouter tenantsubnet

2018-03-29 23:30:48.933 2050638 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'fip-d0f008fc-dc45-4237-9ce0-a9e1977735eb', 'ip', '-4', 'route', 'replace', '192.168.100.0/24', 'via', '169.254.106.114', 'dev', 'fpr-09fd1424-7'] create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:92

# ------
# Doesn't work the other way around, as the fip namespace has not been created by the time the tenant network is attached
openstack router create --disable --no-ha --distributed pubrouter

openstack router add subnet pubrouter tenantsubnet
openstack router set --disable-snat --external-gateway pubnet --enable pubrouter

# to "fix" this we need to re-trigger the right code path

openstack router remove subnet pubrouter tenantsubnet
openstack router add subnet pubrouter tenantsubnet

description: updated
Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

Tracing neutron-l3-agent via rpdb clarifies the issue a bit.

For the case where a tenant network is added before an external network, ext_port_addr_scopes is an empty dict:

https://paste.ubuntu.com/p/Byjp6fNpdd/

--Call--
> /usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_local_router.py(590)_check_if_address_scopes_match()
-> def _check_if_address_scopes_match(self, int_port, ex_gw_port):
...
(Pdb) n
> /usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_local_router.py(592)_check_if_address_scopes_match()
-> int_port_addr_scopes = int_port.get('address_scopes', {})
(Pdb) n
> /usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_local_router.py(593)_check_if_address_scopes_match()
-> ext_port_addr_scopes = ex_gw_port.get('address_scopes', {})
(Pdb) n
> /usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_local_router.py(595)_check_if_address_scopes_match()
-> lib_constants.IP_VERSION_6 if self._port_has_ipv6_subnet(int_port)
(Pdb) int_port_addr_scopes
{u'4': u'd5d483bd-b1a1-4d11-8b98-a9697707321e', u'6': None}
(Pdb) ext_port_addr_scopes
{}

For the case where an external network is added first and then a tenant network, both int_port_addr_scopes and ext_port_addr_scopes have the same content:

(Pdb) n
> /usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_local_router.py(593)_check_if_address_scopes_match()
-> ext_port_addr_scopes = ex_gw_port.get('address_scopes', {})
(Pdb) n
> /usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_local_router.py(595)_check_if_address_scopes_match()
-> lib_constants.IP_VERSION_6 if self._port_has_ipv6_subnet(int_port)
(Pdb) int_port_addr_scopes
{u'4': u'd5d483bd-b1a1-4d11-8b98-a9697707321e', u'6': None}
(Pdb) ext_port_addr_scopes
{u'4': u'd5d483bd-b1a1-4d11-8b98-a9697707321e', u'6': None}
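
For reference, a minimal standalone sketch of this comparison (modeled on _check_if_address_scopes_match from the traces above; simplified to IPv4 only, not the actual neutron code):

    def check_if_address_scopes_match(int_port, ex_gw_port):
        # Compare the IPv4 address scope of the internal port and the
        # external gateway port, as in the pdb traces above.
        int_scopes = int_port.get('address_scopes', {})
        ext_scopes = ex_gw_port.get('address_scopes', {})
        return (int_scopes.get('4') is not None and
                int_scopes.get('4') == ext_scopes.get('4'))

    int_port = {'address_scopes':
                {'4': 'd5d483bd-b1a1-4d11-8b98-a9697707321e', '6': None}}
    print(check_if_address_scopes_match(int_port, {}))        # False - no fast-exit route
    print(check_if_address_scopes_match(int_port, int_port))  # True - route gets added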

Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

Tracing this again, I can see that if the external network is added after a tenant network, ex_gw_port ("network:router_gateway") carries a very limited amount of information compared to fip_agent_port ("network:floatingip_agent_gateway"), which is retrieved from the neutron service via an RPC call from neutron-l3-agent:

https://paste.ubuntu.com/p/YYFmbR3JFn/

# virtually no relevant information

> /usr/lib/python2.7/dist-packages/neutron/agent/l3/dvr_local_router.py(577)process_external()
-> if ex_gw_port:
(Pdb) ex_gw_port
{u'allowed_address_pairs': [], u'extra_dhcp_opts': [], u'updated_at': u'2018-04-01T23:50:55Z', u'device_owner': u'network:router_gateway', u'revision_number': 3, u'port_security_enabled': False, u'binding:profile': {}, u'fixed_ips': [], u'id': u'1e34db4a-c33c-4c41-b65c-2256a444a7d7', u'security_groups': [], u'binding:vif_details': {}, u'binding:vif_type': u'unbound', u'mac_address': u'fa:16:3e:b7:6b:17', u'project_id': u'', u'status': u'DOWN', u'binding:host_id': u'pillan', u'description': u'', u'tags': [], u'device_id': u'5edc73a3-3b60-4942-9026-4da0ce01e93d', u'name': u'', u'admin_state_up': True, u'network_id': u'1b78176f-c608-4283-9bdc-e09961805e29', u'tenant_id': u'', u'created_at': u'2018-04-01T23:50:55Z', u'binding:vnic_type': u'normal', u'ip_allocation': u'deferred'}
(Pdb) ex_gw_port.get('address_scopes')
(Pdb) l
572 import rpdb
573 rpdb.set_trace()
574 if self.agent_conf.agent_mode != (
575 n_const.L3_AGENT_MODE_DVR_NO_EXTERNAL):
576 ex_gw_port = self.get_ex_gw_port()
577 -> if ex_gw_port:
578 self.create_dvr_external_gateway_on_agent(ex_gw_port)
579 self.connect_rtr_2_fip()
580 super(DvrLocalRouter, self).process_external()

...

# address_scopes are present along with other information

(Pdb) fip_agent_port
{u'allowed_address_pairs': [], u'extra_dhcp_opts': [], u'updated_at': u'2018-04-01T23:53:47Z', u'device_owner': u'network:floatingip_agent_gateway', u'revision_number': 4, u'port_security_enabled': False, u'binding:profile': {}, u'binding:vnic_type': u'normal', u'fixed_ips': [{u'subnet_id': u'59084723-8290-4cdd-996f-14de3f6eeacb', u'prefixlen': 21, u'ip_address': u'10.232.17.5'}], u'id': u'84a55dbe-def4-4e60-8089-9ecfd74f890a', u'security_groups': [], u'binding:vif_details': {u'port_filter': True, u'datapath_type': u'system', u'ovs_hybrid_plug': False}, u'address_scopes': {u'4': u'd5d483bd-b1a1-4d11-8b98-a9697707321e', u'6': None}, u'binding:vif_type': u'ovs', u'mac_address': u'fa:16:3e:5a:6d:e0', u'project_id': u'', u'status': u'DOWN', u'subnets': [{u'dns_nameservers': [u'10.232.36.101'], u'ipv6_ra_mode': None, u'gateway_ip': u'10.232.16.1', u'cidr': u'10.232.16.0/21', u'id': u'59084723-8290-4cdd-996f-14de3f6eeacb', u'subnetpool_id': u'c7ba4af5-5aca-49e1-abbf-b7072d82d740'}], u'binding:host_id': u'ipotane', u'description': u'', u'tags': [], u'device_id': u'acee51b4-ed49-4fa3-a8de-81dee0f384ba', u'name': u'', u'admin_state_up': True, u'network_id': u'1b78176f-c608-4283-9bdc-e09961805e29', u'tenant_id': u'', u'created_at': u'2018-04-01T23:53:46Z', u'mtu': 1500, u'extra_subnets': [{u'dns_name...


Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (master)

Fix proposed to branch: master
Review: https://review.openstack.org/558137

Changed in neutron:
assignee: nobody → Dmitrii Shcherbakov (dmitriis)
status: New → In Progress
tags: added: l3-dvr-backlog
Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote : Re: [dvr][fast-exit] a route to a tenant network does not get created in fip namespace if an external network is attached after a tenant network has been attached

There seems to be a concurrency issue with how router updates are handled in l3 agents.

It doesn't affect agent_gw_port because that port is fetched via an explicit RPC call.

If a set_trace is placed in _process_router_update, the issue does not occur and process_external works as expected:

http://paste.openstack.org/show/PzTrJanyIXm9y7H4h26A/

522 def _process_router_update(self):
523 for rp, update in self._queue.each_update_to_next_router():
524 import rpdb
525 rpdb.set_trace()

{u'enable_snat': False, u'gw_port': {u'allowed_address_pairs': [], u'extra_dhcp_opts': [], u'updated_at': u'2018-04-03T21:37:06Z', u'device_owner': u'network:router_gateway', u'revision_number': 12, u'port_security_enabled': False, u'binding:profile': {}, u'binding:vnic_type': u'normal', u'fixed_ips': [{u'subnet_id': u'ee6795c0-8f06-4fa6-9531-04168403c74f', u'prefixlen': 21, u'ip_address': u'10.232.41.4'}], u'id': u'dde20d26-a285-4ee7-b402-830f6c3cea1b', u'security_groups': [], u'binding:vif_details': {u'port_filter': True, u'datapath_type': u'system', u'ovs_hybrid_plug': True}, u'address_scopes': {u'4': u'd5d483bd-b1a1-4d11-8b98-a9697707321e', u'6': None} <- ...

Setting the same in process_external or not setting it at all results in a 100% reproducer.

    def process_external(self):
        import rpdb
        rpdb.set_trace()
        if self.agent_conf.agent_mode != (
...

apt policy python-eventlet
python-eventlet:
  Installed: 0.18.4-1ubuntu1
  Candidate: 0.18.4-1ubuntu1
  Version table:
 *** 0.18.4-1ubuntu1 500

Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

* Localized it further to get_routers in l3 agent which does an RPC call to sync routers - setting a trace before that RPC call clearly makes the problem go away.

* And then to SQL query timing on the API side.

If I place a debug message printing the returned content,

https://github.com/openstack/neutron/blob/fbe308bdc12191c187343b5ef103dea9af738380/neutron/agent/l3/agent.py#L531-L534
            if update.action != queue.DELETE_ROUTER and not router:
                try:
                    update.timestamp = timeutils.utcnow()
                    routers = self.plugin_rpc.get_routers(self.context,
                                                          [update.id])
                    LOG.debug("Got routers via plugin_rpc: %s" % repr(routers)) # <--- added this to print the result without setting a trace above

https://github.com/openstack/neutron/blob/fbe308bdc12191c187343b5ef103dea9af738380/neutron/agent/l3/agent.py#L103-L107
        return cctxt.call(context, 'sync_routers', host=self.host,

I get the following output which does not contain address_scopes for the gateway port:

http://paste.openstack.org/raw/waUJmrtHFOqSnAr74mq3/

The code path on the API side:
neutron/api/rpc/handlers/l3_rpc.py: sync_routers

If I trace the whole sync_routers path from the beginning I get a proper result eventually:

http://paste.openstack.org/show/qGgkUzeFfeBpNM9hVhBL/

If I set a trace at the end of sync_routers (https://github.com/openstack/neutron/blob/fbe308bdc12191c187343b5ef103dea9af738380/neutron/api/rpc/handlers/l3_rpc.py#L129) then I get a reproducer:

https://paste.ubuntu.com/p/J5SJvc3xFF/

'gw_port': {'status': u'DOWN', 'created_at': '2018-04-04T14:07:02Z', 'binding:host_id': u'pillan', 'description': u'', 'allowed_address_pairs': [], 'tags': [], 'extra_dhcp_opts': [], 'updated_at': '2018-04-04T14:07:02Z', 'device_owner': u'network:router_gateway', 'revision_number': 3, 'port_security_enabled': False, 'binding:profile': {}, 'fixed_ips': [], 'id': u'7f0cff2e-a35c-463f-bcb1-1e2357db1bf0', 'security_groups': [], 'device_id': u'ef85572a-e3d0-49d6-a304-2468780b42cf', 'name': u'', 'admin_state_up': True, 'network_id': u'1b78176f-c608-4283-9bdc-e09961805e29', 'tenant_id': u'', 'binding:vif_details': {}, 'binding:vnic_type': u'normal', 'binding:vif_type': u'unbound', 'mac_address': u'fa:16:3e:7e:5e:42', 'project_id': u'', 'ip_allocation': u'deferred'},

If I compare it to a "good" gw_port dict, I can see that the status is "ACTIVE", so it looks like a different result is returned when the RPC call to sync router dicts is made too fast, while the gateway port is still down.

'gw_port': {'status': u'ACTIVE', 'created_at': '2018-04-04T13:15:37Z', 'binding:host_id': u'pillan', 'description': u'', 'allowed_address_pairs': [], 'tags': [], 'extra_dhcp_opts': [], 'updated_at': '2018-04-04T13:17:49Z', 'device_owner': u'network:router_gateway', 'revision_number': 7, 'port_security_enabled': False, 'binding:profile': {}, 'mtu': 1500, 'fixed_ips': [{'subnet_id': u'ee6795c0-8f06-4fa6-9531-04168403c74f', 'prefixlen': 21, 'ip_address': u'10.232.41.1'}], 'id': u'd6201e21-62e6-466c-b34c-3f342b1e5d3c', 'security_groups': [], 'device_id': u'beea4812-7d9d-4113-aa99-20db45a...


summary: [dvr][fast-exit] a route to a tenant network does not get created in fip
namespace if an external network is attached after a tenant network has
- been attached
+ been attached (race condition)
Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

So, to sum up:

* router update -> notification -> l3 agent
< race condition >
* Neutron API <- RPC sync_routers <- l3 agent
< end race condition >
* Neutron API -> return with empty address scopes, gw_port down -> l3 agent
* Neutron API <- RPC get_agent_gateway_port <- l3 agent
* Address scope mismatch -> no routes added

Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

After a gateway port gets created it is marked as unbound:

'binding:vif_type': u'unbound'

https://github.com/openstack/neutron/blob/fbe308bdc12191c187343b5ef103dea9af738380/neutron/db/l3_db.py#L1744-L1778
for port in self._each_port_having_fixed_ips(ports):

The code path that populates address_scope information does so based on the port's relation to a particular subnet, because there is a check for a fixed IP assignment:

https://github.com/openstack/neutron/blob/fbe308bdc12191c187343b5ef103dea9af738380/neutron/db/l3_db.py#L1684
_each_port_having_fixed_ips

* this check is performed after a gateway port is created and persisted to the neutron DB;
* at that point the port already has a binding to a host, e.g. 'binding:host_id': u'pillan';
* but its vif_type is still unbound: 'binding:vif_type': u'unbound'
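
To illustrate why the empty fixed_ips list matters, here is a hypothetical standalone model (SUBNET_TO_SCOPE and address_scopes_for are illustrative inventions, not neutron code) of address scopes being derived from the subnets behind a port's fixed IPs:

    # A port with deferred IP allocation has an empty fixed_ips list,
    # so no address scope can be derived for it.
    SUBNET_TO_SCOPE = {
        '59084723-8290-4cdd-996f-14de3f6eeacb':
            'd5d483bd-b1a1-4d11-8b98-a9697707321e',
    }

    def address_scopes_for(port):
        scopes = {}
        for fixed_ip in port.get('fixed_ips', []):
            scopes['4'] = SUBNET_TO_SCOPE.get(fixed_ip['subnet_id'])
        return scopes

    unbound = {'fixed_ips': [], 'binding:vif_type': 'unbound'}
    bound = {'fixed_ips': [{'subnet_id': '59084723-8290-4cdd-996f-14de3f6eeacb'}]}
    print(address_scopes_for(unbound))  # {} - leads to the scope mismatch
    print(address_scopes_for(bound))    # {'4': 'd5d483bd-...'}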

So, this code path:

sync_routers
https://github.com/openstack/neutron/blob/fbe308bdc12191c187343b5ef103dea9af738380/neutron/api/rpc/handlers/l3_rpc.py#L126-L129

_ensure_host_set_on_ports
https://github.com/openstack/neutron/blob/fbe308bdc12191c187343b5ef103dea9af738380/neutron/api/rpc/handlers/l3_rpc.py#L131-L147

    def _ensure_host_set_on_ports(self, context, host, routers):
            if router.get('gw_port') and router.get('distributed'):
...
                self._ensure_host_set_on_port(context,

_ensure_host_set_on_port
https://github.com/openstack/neutron/blob/fbe308bdc12191c187343b5ef103dea9af738380/neutron/api/rpc/handlers/l3_rpc.py#L181-L187

    def _ensure_host_set_on_port(self, context, host, port, router_id=None,
                                 ha_router_port=False):
                    self.plugin.update_port(
                        context,
                        port['id'],
                        {'port': {portbindings.HOST_ID: host}})

which calls update_port in the ML2 plugin

neutron/plugins/ml2/plugin.py|1295| def update_port(self, context, id, port):

should take care of the binding.
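
Roughly, assuming that setting binding:host_id via update_port is what lets ML2 complete the binding, the flow can be modeled like this (a simplified standalone sketch of the l3_rpc handlers linked above, not the actual code):

    def ensure_host_set_on_port(plugin, context, host, port):
        # Bind a distributed router's gateway port to the requesting
        # host; ML2's update_port then moves the port from
        # 'binding:vif_type': 'unbound' to a real vif_type.
        if port and port.get('binding:host_id') != host:
            plugin.update_port(context, port['id'],
                               {'port': {'binding:host_id': host}})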

description: updated
description: updated
Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

There are 4 router ports in unbound/DOWN state where device_owner is network:router_gateway

http://paste.openstack.org/show/718419/

There is only one network:router_gateway port that is up.

I think I need to try the same setup but only with one dvr_snat agent present with ml2 config for both physnets and see if there are any differences.

Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

With one dvr_snat node and a single physnet configured on it (otherwise HostConnectedToMultipleSegments is raised), all ports end up ACTIVE and there are no extra router_gateway ports, but the issue is still present - in other words, it is not caused by a wrong router_gateway port being fetched.

http://paste.openstack.org/show/718434/

Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

I think it's this portion of the code that results in the issue:

https://github.com/openstack/neutron/blob/fbe308bdc12191c187343b5ef103dea9af738380/neutron/api/rpc/handlers/l3_rpc.py#L119-L129

# this populates "routers"
            routers = (
                self.l3plugin.list_active_sync_routers_on_active_l3_agent(
                    context, host, router_ids))

...
        if utils.is_extension_supported(
            self.plugin, constants.PORT_BINDING_EXT_ALIAS):
# ensures the right port binding
            self._ensure_host_set_on_ports(context, host, routers)
...
# the same data structure is returned as after list_active_sync_routers_on_active_l3_agent, without being refreshed
        return routers

_populate_mtu_and_subnets_for_ports is eventually called on a code path from list_active_sync_routers_on_active_l3_agent, and this depends on the binding being other than "unbound".

It seems that we need to call list_active_sync_routers_on_active_l3_agent twice - I confirmed that routes get created if the routers variable is refreshed after _ensure_host_set_on_ports, before returning from sync_routers.
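
A sketch of what that change to sync_routers could look like (simplified; the merged patch referenced below is authoritative):

    routers = (
        self.l3plugin.list_active_sync_routers_on_active_l3_agent(
            context, host, router_ids))
    if utils.is_extension_supported(
            self.plugin, constants.PORT_BINDING_EXT_ALIAS):
        self._ensure_host_set_on_ports(context, host, routers)
        # Refresh the router dicts so gw_port reflects the binding
        # performed above (fixed_ips, address_scopes, etc.).
        routers = (
            self.l3plugin.list_active_sync_routers_on_active_l3_agent(
                context, host, router_ids))
    return routers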

Revision history for this message
Swaminathan Vasudevan (swaminathan-vasudevan) wrote :

The gateway_port host binding for DVR routers is handled in 'l3_dvr_db', where we specifically pass in the host information for the gateway.
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvr_db.py#L612

Revision history for this message
Swaminathan Vasudevan (swaminathan-vasudevan) wrote :

So, to summarize: what you are saying is that when the external gateway is attached after the tenant network, the address_scopes are not populated in the get_sync_data result sent by the server to the agent.

Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

Yes - neither address scope information nor the fixed IP address assignment is present, because the port is still unbound during the first call to list_active_sync_routers_on_active_l3_agent (_build_routers_list is called after some nested calls).

The fixed IP address allocation is what is used to include address scope information in the gateway port dict.

So, after we ensure that the gateway port is bound via _ensure_host_set_on_ports (which relies on the neutron ML2 core plugin), we need to rebuild the list of routers, because router info dicts include external gateway port information, and that needs to be refreshed with the post-binding data persisted to the neutron DB by ML2.

Revision history for this message
Swaminathan Vasudevan (swaminathan-vasudevan) wrote :

Ok Got it.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (master)

Reviewed: https://review.openstack.org/558137
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=ff5e8d7d6cdc6ce3cd93aededec512e1855f6c28
Submitter: Zuul
Branch: master

commit ff5e8d7d6cdc6ce3cd93aededec512e1855f6c28
Author: Dmitrii Shcherbakov <email address hidden>
Date: Sun Apr 1 21:02:10 2018 -0400

    Refresh router objects after port binding

    Post-binding information about router ports is missing in results of RPC
    calls made by l3 agents. sync_routers code ensures that bindings are
    present, however, it does not refresh router objects before returning
    them - for RPC clients ports remain unbound before the next sync and
    there is no necessary address scope information present to create routes
    from fip namespaces to qrouter namespaces.

    Change-Id: Ia135f0ed7ca99887d5208fa78fe4df1ff6412c26
    Closes-Bug: #1759971

Changed in neutron:
status: In Progress → Fix Released
Revision history for this message
Dmitrii Shcherbakov (dmitriis) wrote :

Affects Pike and Queens UCA.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/queens)

Fix proposed to branch: stable/queens
Review: https://review.openstack.org/559494

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to neutron (stable/pike)

Fix proposed to branch: stable/pike
Review: https://review.openstack.org/559496

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/queens)

Reviewed: https://review.openstack.org/559494
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=e53aa131ab65d50457efb01dee20332f935f15d0
Submitter: Zuul
Branch: stable/queens

commit e53aa131ab65d50457efb01dee20332f935f15d0
Author: Zuul <email address hidden>
Date: Fri Apr 6 22:34:55 2018 +0000

    Refresh router objects after port binding

    Post-binding information about router ports is missing in results of RPC
    calls made by l3 agents. sync_routers code ensures that bindings are
    present, however, it does not refresh router objects before returning
    them - for RPC clients ports remain unbound before the next sync and
    there is no necessary address scope information present to create routes
    from fip namespaces to qrouter namespaces.

     Conflicts:
     neutron/api/rpc/handlers/l3_rpc.py

    Change-Id: Ia135f0ed7ca99887d5208fa78fe4df1ff6412c26
    Closes-Bug: #1759971
    (cherry picked from commit 1ce070c8e2edcdd58b2c032c3223bde318ec1ee9)

tags: added: in-stable-queens
Changed in neutron (Ubuntu Artful):
status: New → Triaged
importance: Undecided → Medium
Changed in neutron (Ubuntu Bionic):
importance: Undecided → Medium
status: New → Triaged
Revision history for this message
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package neutron - 2:12.0.0-0ubuntu3

---------------
neutron (2:12.0.0-0ubuntu3) bionic; urgency=medium

  * d/p/refresh-router-objects-after-port-binding.patch: Cherry-picked
    from upstream stable/queens branch (LP: #1759971).
  * d/p/use-cidr-during-tenant-network-rule-deletion.patch: Cherry-picked
    from upstream stable/queens branch (LP: #1759956).

 -- Corey Bryant <email address hidden> Mon, 16 Apr 2018 16:06:25 -0400

Changed in neutron (Ubuntu Bionic):
status: Triaged → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 13.0.0.0b1

This issue was fixed in the openstack/neutron 13.0.0.0b1 development milestone.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 12.0.2

This issue was fixed in the openstack/neutron 12.0.2 release.

tags: added: neutron-proactive-backport-potential
Revision history for this message
Corey Bryant (corey.bryant) wrote :

12.0.2 is available in the Queens cloud archive.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to neutron (stable/pike)

Reviewed: https://review.openstack.org/559496
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=29e0c7629fd6377539e5c15e4df34672e0bf1691
Submitter: Zuul
Branch: stable/pike

commit 29e0c7629fd6377539e5c15e4df34672e0bf1691
Author: Dmitrii Shcherbakov <email address hidden>
Date: Sun Apr 1 21:02:10 2018 -0400

    Refresh router objects after port binding

    Post-binding information about router ports is missing in results of RPC
    calls made by l3 agents. sync_routers code ensures that bindings are
    present, however, it does not refresh router objects before returning
    them - for RPC clients ports remain unbound before the next sync and
    there is no necessary address scope information present to create routes
    from fip namespaces to qrouter namespaces.

     Conflicts:
     neutron/api/rpc/handlers/l3_rpc.py

    Change-Id: Ia135f0ed7ca99887d5208fa78fe4df1ff6412c26
    Closes-Bug: #1759971
    (cherry picked from commit ff5e8d7d6cdc6ce3cd93aededec512e1855f6c28)

Revision history for this message
Corey Bryant (corey.bryant) wrote :

Artful is EOL

Changed in neutron (Ubuntu Artful):
status: Triaged → Invalid
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/neutron 11.0.6

This issue was fixed in the openstack/neutron 11.0.6 release.

Revision history for this message
Chris MacNaughton (chris.macnaughton) wrote :

I've confirmed that the packages in Pike have this fix applied.

Changed in cloud-archive:
status: Fix Committed → Fix Released