[RFE] Enable other subprojects to extend l2pop fdb information

Bug #1793653 reported by ChenjieXu
This bug affects 1 person
Affects Status Importance Assigned to Milestone
neutron
Won't Fix
Wishlist
ChenjieXu

Bug Description

The Layer 2 population (l2pop) mechanism driver implements an ML2 driver that improves the overlay implementations of the open source plugins (VXLAN with Linux bridge and GRE/VXLAN with OVS)[1]. L2pop avoids broadcasts for MAC learning and ARP resolution by prepopulating the bridge forwarding tables[2]. However, some projects connect Neutron networks with outside networks; for example, networking-bgpvpn can interconnect BGP/MPLS VPNs with OpenStack Neutron networks, routers and ports[3]. For connectivity to these outside networks, l2pop may need to provide more information than it does today.

This RFE proposes to add an extension registration mechanism to l2pop so that other subprojects can extend the information included in a full FDB message before it is sent to the agent.

Problem Description

Some projects connect Neutron networks with outside networks. For example, networking-bgpvpn aims at supporting interconnection between L3VPNs and Neutron resources, i.e. networks, routers and ports[4]. A typical use case is the following: a tenant already has a BGP IP VPN (a set of external sites) set up outside the datacenter, and uses networking-bgpvpn to establish connectivity between its VMs and these VPN sites.

Figure-1 illustrates an environment in which OpenStack is deployed in a datacenter and BGPVPN-1 is a BGP-based VPN. The OpenStack deployment has l2pop and bgpvpn enabled. When a VM first sends a packet to a device in the BGPVPN, the broadcast for MAC learning and ARP resolution cannot be avoided. The use case is listed below:
Use Case
1. The cloud/network admin creates BGPVPN-1 for a tenant based on contract and OSS information about the VPN for this tenant.
2. The tenant associates BGPVPN-1 with network-1 (VM-11 belongs to network-1).
3. VM-11 in network-1 sends its first packet to Device-1 in BGPVPN-1.

                   (You can find the figure in the comment#1)
                                    Figure-1

In step 3, the broadcast for MAC learning and ARP resolution cannot be avoided: Neutron does not have the port information of devices in the BGPVPN, so the FDBs sent by Neutron will not include the port information of Device-1. Consequently, before step 3 there are no flows related to Device-1 on Host 1, and ARP requests are broadcast.

Therefore, we introduce an extension registration mechanism that lets other subprojects such as networking-bgpvpn register their own functions with l2pop. A registered function can add the port information of the outside networks to the FDBs, so the broadcast for MAC learning and ARP resolution can be avoided.

Proposed Change

The idea is to add an extension registration mechanism to the l2pop mech_driver.py so that other subprojects can register their own functions with l2pop. A module-level variable l2pop_fdb_extend_funcs stores the functions registered by other subprojects, and two functions are added: register_fdb_extend_func and run_fdb_extend_funcs. register_fdb_extend_func is used by other subprojects to register a function with l2pop. For example, networking-bgpvpn can import the l2pop driver and register its own function bgpvpn_fdb_extend_func as follows:

    from neutron.plugins.ml2.drivers.l2pop import mech_driver as l2pop_driver

    def register_callbacks(self):
        l2pop_driver.register_fdb_extend_func(constants.BGPVPN,
                                              self.bgpvpn_fdb_extend_func)
run_fdb_extend_funcs is called every time a full FDB is created, so the pre-registered functions stored in l2pop_fdb_extend_funcs are invoked to extend the newly created FDB. Through this mechanism, subprojects can extend the information included in the FDB. All changes can be viewed through the link below:
https://review.openstack.org/#/q/I87450e332c1a6d8bd529eb8082292e73c533676e
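A minimal sketch of the server-side hooks, assuming the names used in this RFE (the exact signatures and module layout are illustrative, not the final implementation):

    # Hypothetical additions to neutron/plugins/ml2/drivers/l2pop/mech_driver.py

    l2pop_fdb_extend_funcs = {}

    def register_fdb_extend_func(subproject, func):
        # Called by a subproject to register a callback that extends
        # full FDB messages before they are sent to the agents.
        l2pop_fdb_extend_funcs[subproject] = func

    def run_fdb_extend_funcs(context, fdb_entries):
        # Called whenever a full FDB is built; each registered callback
        # may add entries to fdb_entries in place.
        for func in l2pop_fdb_extend_funcs.values():
            func(context, fdb_entries)
        return fdb_entries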

Data Model Impact
None

REST API Impact
None

Command Line Client Impact
None

Other Impact
None

Other Deployer Impact
None

Performance Impact
Performance testing should be conducted to measure the overhead of adding more information to the FDBs.

Implementation
Assignee(s)

Work Items
Add the register_fdb_extend_func and run_fdb_extend_funcs functions to l2pop mech_driver.py.
Add related tests.
Documentation.

Dependencies
None

Testing
Unit tests, functional tests and scenario tests are necessary.

Documentation Impact
How to register an FDB extend function with l2pop should be documented.

References
[1] https://github.com/openstack/neutron/tree/master/neutron/plugins/ml2/drivers/l2pop
[2] https://wiki.openstack.org/wiki/L2population_blueprint
[3] https://github.com/openstack/networking-bgpvpn
[4] https://docs.openstack.org/networking-bgpvpn/latest/user/overview.html

ChenjieXu (midone)
Changed in neutron:
assignee: nobody → ChenjieXu (midone)
Boden R (boden)
tags: added: rfe
Changed in neutron:
status: New → In Progress
ChenjieXu (midone)
summary: - [RFE] Enable other projects to extend l2pop fdb information
+ [RFE] Enable other subprojects to extend l2pop fdb information
Revision history for this message
Slawek Kaplonski (slaweq) wrote :

Can You check BP https://blueprints.launchpad.net/neutron/+spec/native-l2pop
I think that Your proposal may be easier to implement if that BP were done first.

Revision history for this message
Miguel Lavalle (minsel) wrote :

Let's keep the status as "New" so it shows up in the RFEs screen

Changed in neutron:
importance: Undecided → Wishlist
status: In Progress → New
Revision history for this message
ChenjieXu (midone) wrote :

Hi Slawek, thanks for your comment! I looked at the BP you provided and the RFE inside it. However, the BP does not seem to be progressing quickly; the last comment on that RFE is from 2017-05-18. Maybe we can implement this RFE first?

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

Hi ChenjieXu, I know that this BP is not in progress currently. I just wanted to point out that something like that exists and to ask whether it could help achieve what You are proposing.
IMO it could: if we had the l2pop info directly on the port, as the mentioned BP proposes, we already have mechanisms to extend a port's info with other information, so Your proposal would then be easier to do.
What do You think about it?
And also what is the opinion of others about that?

Revision history for this message
ChenjieXu (midone) wrote :

Hi Slawek, I don't think the referenced native-l2pop BP helps in this use case, since the extension framework being proposed for l2pop FDB entries is required for the BGP VPN use case, which directly manages the FDB entries for l2pop. native-l2pop wants to move the information to the port object and provide custom extensions at that level, but I think this use case is best served with extensions to the FDB.

Changed in neutron:
status: New → In Progress
Revision history for this message
YAMAMOTO Takashi (yamamoto) wrote :

what exactly do you want to do?
representing an extra set of (mac, ip) pairs for a neutron network?
("extra" in a sense that they don't belong to neutron ports.)

to me, it sounds a little strange to use l2pop directly for your purpose.
because i thought bgpvpn was supported by some of non l2-agent based backends like odl.
isn't it a problem for them?

Revision history for this message
ChenjieXu (midone) wrote :

Hi yamamoto, thank you for your comment!

For your first question "representing an extra set of (mac, ip) pairs for a neutron network?"
Yes, it is. But a host IP or gateway IP is also needed. Currently l2pop uses:
ports:{agent_ip_1: [(mac_1, ip_1),(mac_2, ip_2)], agent_ip_2: [(mac_3, ip_3),(mac_4, ip_4)]}
The host IP or gateway IP will be used as the agent_ip.

For your second question "bgpvpn was supported by some of non l2-agent based backends like odl"
The l2pop FDBs can be used only by an l2-agent, so the extra information is useful on the Neutron network side. In the use case provided in the RFE, this information helps avoid broadcasts when a VM in the Neutron network tries to send a packet to a device in the BGPVPN. Because networking-bgpvpn doesn't use an l2-agent, a broadcast is still needed when the device tries to send a packet to a VM in the Neutron network.

Revision history for this message
Miguel Lavalle (minsel) wrote :

Hi,

I have some questions:

1) Is your use case focused around networking-bgpvpn? Or do you have use cases around other sub-projects?

2) How much of a performance issue are the ARPs in your use case? Have you quantified it? Does it justify this proposal?

3) Do you know why this hasn't been an issue for the networking-bgpvpn team? I am going to ask Thomas Morin (networking-bgpvpn lead) to look at this RFE.

4) Is this requirement related to StarlingX? If yes, can you provide more background as to how this may benefit StarlingX?

Revision history for this message
Miguel Lavalle (minsel) wrote :

Sent an email to Thomas. Let's see if he can give some additional insight into this proposal

Revision history for this message
ChenjieXu (midone) wrote :

Hi Miguel,

Thank you for your comments! My reply is listed below:

1) The use case in the RFE is focused on networking-bgpvpn, but I have another use case in which Neutron itself can also use this ability to extend the l2pop FDBs. An RFE "Add l2pop support for floating ip resources" (for convenience, RFE_FLOATINGIP) has been drafted and is currently under internal review. RFE_FLOATINGIP proposes to add l2pop support for floating IP resources; it avoids broadcasts when two VM instances residing on different networks communicate via their respective floating IP addresses. The ability to extend l2pop FDBs provided by this RFE, "Enable other subprojects to extend l2pop fdb information" (RFE_L2POP), can be used by RFE_FLOATINGIP. In that use case, the ability to extend l2pop FDBs is used by Neutron itself.

2) I haven't done any performance testing. This RFE is a patch from StarlingX; I will send an email to WindRiver to ask about the performance issues.

3) Thank you for your email! I didn't ask networking-bgpvpn team. And I think it's good to ask for their advice.

4) Yes, this is a patch from StarlingX. This patch is used by stx-networking-bgpvpn, which is a subproject of StarlingX. By registering the callback, stx-networking-bgpvpn can add the FDBs of the devices in the BGPVPN. This information helps avoid broadcasts when a VM in the Neutron network tries to send a packet to a device in the BGPVPN.

Revision history for this message
Miguel Lavalle (minsel) wrote :

Hi,

Thanks for your response. Let's bring it up to the drivers meeting at the next opportunity

tags: added: rfe-triaged
tags: removed: rfe
Revision history for this message
ChenjieXu (midone) wrote :

Hi Miguel,

Thank you for your proposal! Let's bring it up to the drivers meeting at the next opportunity!

Revision history for this message
Allain Legacy (alegacy) wrote :

The primary motivation for this change is about finding a way to push (and pull) port MAC/IP information to other subprojects. Since the requirements for publishing MAC/IP information for BGP eVPN are similar to those that define the L2POP mechanism it seemed like a natural evolution to extend this mechanism to support other subprojects.

Our extensions to the networking-bgpvpn and neutron-dynamic-routing subprojects to support BGP eVPN enable the system to learn external MAC/IP information from remote systems and to publish it internally via the L2POP mechanism. Similarly, internal MAC/IP information (i.e., neutron port information) is published to remote systems via a subscription to L2POP RPC notifications received by the BGP agent.

As mentioned above, another use case for this extension is being able to publish floating IP information without adding too much code directly to the L2POP mechanism driver, by leaving the logic in the L3 code/domain as an extension. Publishing FIP information is useful for both the BGP eVPN use case and normal Neutron networking use cases.

The performance concerns discussed above are not as critical as the functional requirements, but still, requiring the virtual switch to do flooding via Head End Replication (HER) is expensive. If there are 2 or 3 compute nodes then it is less of a problem but when the system is scaled up to 10, 50, or more nodes replicating a packet and sending it for each node is not desirable and dramatically reduces the available throughput of the virtual switch and increases the latency of packets that are queued behind packets that require flooding via HER.

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

This was discussed at the drivers meeting on 9.11. YAMAMOTO Takashi will ask the networking-odl folks for their feedback on it too.
We want to know whether this should be provided with some abstraction that would also allow it to be used in agent-less implementations, like ODL, or whether that is not necessary.

Also, ChenjieXu will provide some examples of l2pop RPC payloads with this extension in this RFE and we will get back to it at the next drivers meeting.

Revision history for this message
ChenjieXu (midone) wrote :

The FDB format for the l2 population mechanism:
{network_id:
          {'segment_id': segment['segmentation_id'],
           'network_type': segment['network_type'],
           'ports': {agent_ip_1: [(mac_1, ip_1),(mac_2, ip_2)],
                     agent_ip_2: [(mac_3, ip_3),(mac_4, ip_4)]}}}
In Neutron, network_type is vxlan, gre or geneve.
            segmentation_id: for a vxlan network it is the VNI, for a gre network the GRE key, and for a geneve network the Geneve VNI.
            network_type together with segmentation_id is used to look up the tunnel port on the tunnel bridge.
            agent_ip is used to install flood flows on the tunnel bridge.

To extend the l2pop FDB, the information inserted by other subprojects must follow the l2pop FDB format.
For other subprojects (such as networking-bgpvpn):
            network_type and segmentation_id should be decided by the subproject
            a gateway IP can be used as agent_ip to install flood flows on the tunnel bridge
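A hypothetical extend function illustrating this (lookup_bgpvpn_endpoints and the values it returns are made up for the example; the point is that inserted entries must follow the FDB format shown above):

    def bgpvpn_fdb_extend_func(context, fdb_entries):
        # fdb_entries: {network_id: {'segment_id': ..., 'network_type': ...,
        #                            'ports': {agent_ip: [(mac, ip), ...]}}}
        for network_id, entry in fdb_entries.items():
            # Hypothetical lookup of (gateway_ip, mac, ip) tuples learned
            # from the BGPVPN side for this network.
            for gateway_ip, mac, ip in lookup_bgpvpn_endpoints(context, network_id):
                # The gateway IP plays the role of agent_ip so that agents
                # create tunnels and flows towards it.
                entry['ports'].setdefault(gateway_ip, []).append((mac, ip))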

Revision history for this message
ChenjieXu (midone) wrote :

Example:

L2pop driver on server side:
===============================
1) when port binding or status or ip_address(or mac) is updated, notify this port's FDB(i.e port's ip address, mac, hosting agent's tunnel ip) to all agents
2) also checks if it is the first or last port on the hosting agent's network, if so
   a) notifies all agents to create/delete tunnels and flood flows to hosting agent
   b) notifies hosting agent about all the existing network ports on other agents(so that hosting agent can
      create tunnels, flood and unicast flows to these ports).

For subproject networking-bgpvpn to extend l2pop fdbs:
===============================
1) when port binding or status or ip_address(or mac) is updated, notify this port's FDB(i.e port's ip
   address, mac, hosting agent's tunnel ip) to all agents
   No action
2) also checks if it is the first or last port on the hosting agent's network, if so
   a) notifies all agents to create/delete tunnels and flood flows to hosting agent
      No action
   b) notifies hosting agent about all the existing network ports on other agents(so that hosting agent can
      create tunnels, flood and unicast flows to these ports).
      Insert the ports information from BGPVPN to the FDB
      The network_id, segment_id and network_type are all from neutron. Because the network_id, segment_id
      and network_type relate to the port which triggers l2pop.
         network_id: comes from the port's network
         segment_id: comes from the segment that the port is bound to.
         network_type: comes from the port's network
      Thus networking-bgpvpn only needs to insert information into 'ports' as following:
        {75b21e39-15b9-4f12-8a60-1375a4dbdbef:
          {'segment_id': '12',
           'network_type': 'vxlan',
           'ports': {'12.66.1.1': [('fa:16:3e:d2:70:92', '168.177.1.23'),('fa:16:3e:f3:a4:21',
                                    '168.177.1.5')],
                     '13.96.1.1': [('fa:16:3e:f9:cc:77', '169.177.1.16'),( 'fa:16:3e:a5:ee:f1',
                                    '169.177.1.12')]}}}
3) Update the BGPVPN gateway
   a) add a new gateway
      Use the L2populationAgentNotifyAPI in neutron/plugins/ml2/drivers/l2pop/rpc.py. For simplicity, we
      refer to L2populationAgentNotifyAPI as l2pop_notifier.
      l2pop_notifier.add_fdb_entries()
         network: the network associated with the BGPVPN
         network_id: network’s id
         segment_id: network’s “provider:segmentation_id”.
         network_type: network’s “provider:network_type”
     Thus networking-bgpvpn needs to call add_fdb_entries() to add the following fdb:
        {75b21e39-15b9-4f12-8a60-1375a4dbdbef:
          {'segment_id': '100',
           'network_type': 'vxlan',
           'ports': {'12.66.1.1': [('00:00:00:00:00:00', '0.0.0.0')]}}}
   b) withdraw an existing gateway
      l2pop_notifier.remove_fdb_entries()
         network: the network associated with the BGPVPN
         network_id: network’s id
         segment_id: network’s “provider:segmentation_id”.
         network_type: network’s “provider:network_type”
      Thus networking-bgpvpn needs to call remove_fdb_entries() to remove the fo...
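A sketch of step 3a (adding a new gateway), assuming the l2pop_notifier described above; publish_bgpvpn_gateway is a hypothetical helper and the exact call signature should be checked against neutron/plugins/ml2/drivers/l2pop/rpc.py:

    from neutron.plugins.ml2.drivers.l2pop import rpc as l2pop_rpc

    l2pop_notifier = l2pop_rpc.L2populationAgentNotifyAPI()

    def publish_bgpvpn_gateway(context, network, gateway_ip):
        # Announce a new BGPVPN gateway by pushing a flooding entry to all agents.
        fdb_entries = {
            network['id']: {
                'segment_id': network['provider:segmentation_id'],
                'network_type': network['provider:network_type'],
                # The flooding entry makes agents add gateway_ip as a tunnel
                # endpoint and include it in the flood flows for this network.
                'ports': {gateway_ip: [('00:00:00:00:00:00', '0.0.0.0')]},
            }
        }
        l2pop_notifier.add_fdb_entries(context, fdb_entries)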


Revision history for this message
ChenjieXu (midone) wrote :

"[RFE] Add l2pop support for floating IP resources" has been proposed. The link is below:
https://bugs.launchpad.net/neutron/+bug/1803494

As mentioned above, "[RFE] Add l2pop support for floating IP resources" can be another use case for "[RFE] Enable other subprojects to extend l2pop fdb information".

Revision history for this message
YAMAMOTO Takashi (yamamoto) wrote :

Chenjie,

thank you for examples.

in your example, is 12.66.1.1 a gateway ip?
what's 13.96.1.1?

why is segment_id different for 2) and 3)? (12 vs 100)

do you mean that network_type and segment_id are not used by bgpvpn?

i still don't understand this comment. http://eavesdrop.openstack.org/meetings/neutron_drivers/2018/neutron_drivers.2018-11-09-14.01.log.html#l-59
bgpvpn network is a completely separate concept from neutron network?
i thought they could be mixed in a "ports" dict.

Revision history for this message
ChenjieXu (midone) wrote :

Hi YAMAMOTO,

thank you for your comments! My reply is below:

=============================================================
in your example, is 12.66.1.1 a gateway ip?
Yes, 12.66.1.1 is a gateway IP.

=============================================================
what's 13.96.1.1?
13.96.1.1 is another gateway IP.

=============================================================
why is segment_id different for 2) and 3)? (12 vs 100)
This is a good question. I picked different values for 2) and 3) because:
   In 2), segment_id comes from the segment that the port is bound to (that port triggers l2pop).
   In 3), segment_id is the network's "provider:segmentation_id" (the network is associated with the bgpvpn).

===========================================================================================
do you mean that network_type and segment_id are not used by bgpvpn?
Yes, bgpvpn doesn't use network_type and segment_id. Thus we will use the network's "provider:network_type"
and "provider:segmentation_id". You can check the db model for bgpvpn via the following link:
https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/db/bgpvpn_db.py#L166

===========================================================================================
i still don't understand this comment. http://eavesdrop.openstack.org/meetings/neutron_drivers/2018/neutron_drivers.2018-11-09-14.01.log.html#l-59
bgpvpn network is a completely separate concept from neutron network?
Yes, bgpvpn network is a separate concept. I will explain this in the following question.

===========================================================================================
i thought they could be mixed in a "ports" dict.
To extend l2pop, networking-bgpvpn needs to follow the FDB format. However, just following the format is not enough; we also need to ensure the (ip, mac) pairs can be installed on the tunnel bridge in Neutron.

The OVS agent uses network_id to get the lvm (LocalVLANMapping):
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L547
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc_manager/l2population_rpc.py#L226
After getting the lvm, the OVS agent uses _tunnel_port_lookup(self, network_type, remote_ip) to look up the port on the tunnel bridge. If the port doesn't exist, it creates a new one.
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L552
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc_manager/l2population_rpc.py#L247
segment_id is used as the segmentation_id to install flows. segmentation_id: for a vxlan network it is the VNI, for a gre network the GRE key, and for a geneve network the Geneve VNI.
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L575
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L581
So we can see that network_id, network_type and segment_id are used by the OVS agent to install the (ip, mac) pairs as flows.

Linuxbridge agent will use ...


Revision history for this message
ChenjieXu (midone) wrote :

Sorry, the previous comment was sent by mistake. Please read the previous comment and this comment!

For question "i thought they could be mixed in a "ports" dict".
If networking-bgpvpn follows the FDB format and picks correct values for network_id, network_type and segment_id, then the FDBs generated by networking-bgpvpn and Neutron can be mixed, but not inside a single "ports" dict. For example:
{75b21e39-15b9-4f12-8a60-1375a4dbdbef:
          {'segment_id': '12',
           'network_type': 'vxlan',
           'ports': {'12.66.1.1': [('fa:16:3e:d2:70:92', '168.177.1.23'),('fa:16:3e:f3:a4:21',
                                    '168.177.1.5')],
                     '13.96.1.1': [('fa:16:3e:f9:cc:77', '169.177.1.16'),( 'fa:16:3e:a5:ee:f1',
                                    '169.177.1.12')]}}
          {'segment_id': '100',
           'network_type': 'vxlan',
           'ports': {'12.66.1.1': [('00:00:00:00:00:00', '0.0.0.0')]}}}

For question "i still don't understand this comment. http://eavesdrop.openstack.org/meetings/neutron_drivers/2018/neutron_drivers.2018-11-09-14.01.log.html#l-59".
The OVS agent uses agent_ip as remote_ip to set up the tunnel port:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1509
When the l2pop mechanism driver generates an FDB, it adds the flooding entry ('00:00:00:00:00:00', '0.0.0.0') to the FDB as follows:
{75b21e39-15b9-4f12-8a60-1375a4dbdbef:
          {'segment_id': '100',
           'network_type': 'vxlan',
           'ports': {'12.66.1.1': [('00:00:00:00:00:00', '0.0.0.0')]}}}
If the OVS agent finds a flooding entry, it adds the tunnel port to the flood flows:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L575

If the Linux bridge agent finds a flooding entry, it adds the agent_ip to the FDB for unicast flooding:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py#L773
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py#L776

So we can see that agent_ip is used for flooding. In networking-bgpvpn, the gateway_ip can likewise be used for flooding, provided that a flooding entry has been inserted.

Revision history for this message
Isaku Yamahata (yamahata) wrote :

Regarding networking-odl: ODL netvirt doesn't have the notion of l2pop, so it doesn't make sense to use l2pop.
On the other hand, ODL has EVPN support and its own support for remote (mac, ip) pairs so that remote ARP broadcasts can be reduced.
(A local ARP responder is also supported, of course.)

The proposal seems very implementation-specific and lacks the right abstraction.
The solution would need some notion of a remote port (or similar) so that Neutron can know about remote (mac, ip) pairs.
L2pop is only one way to realize/implement it in a backend, as pointed out above.

Revision history for this message
ChenjieXu (midone) wrote :

I sent an email to WindRiver to ask about the plan for upstreaming the changes in stx-networking-bgpvpn to networking-bgpvpn. The reply is below:

The plan is to upstream this to the respective projects (networking-bgpvpn and neutron-dynamic-routing). However, in the short term this is not being prioritized, nor has any attempt been made to approach the individual project teams about getting this accepted.

Revision history for this message
Miguel Lavalle (minsel) wrote :

I also asked Thomas Morin, who leads the networking-bgpvpn project, to give us input on this RFE.

Revision history for this message
Miguel Lavalle (minsel) wrote :

Moving it to triaged state for the time being so we can discuss it further when we have more input

tags: added: rfe-confirmed
removed: rfe-triaged
Revision history for this message
Miguel Lavalle (minsel) wrote :

I meant to move it to the rfe-confirmed stage.

Revision history for this message
ChenjieXu (midone) wrote :

ALL,

This RFE is specific to the Neutron BGP-EVPN use-case which is currently not being pursued for upstreaming from stx-networking-bgpvpn. Therefore, until this feature is required, and an attempt is made to have it accepted by the OpenStack community, this feature cannot be used as a justification for this RFE. This RFE is being abandoned and can be revived if the BGP-EVPN feature becomes a priority. Thank you so much for reviewing this RFE!

Revision history for this message
Miguel Lavalle (minsel) wrote :

Thanks for the update!

tags: added: rfe-postponed
removed: rfe-confirmed
Changed in neutron:
status: In Progress → Won't Fix
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on neutron (master)

Change abandoned by ChenjieXu (<email address hidden>) on branch: master
Review: https://review.openstack.org/599319
Reason: As Miguel said, the RFE is being abandoned and can be revived if the BGP-EVPN feature becomes a priority.
