[RFE] extend l2gw api to support multi-site connectivity

Bug #1529863 reported by Irena Berezovsky
This bug affects 3 people
Affects: networking-l2gw
Status: New
Importance: Undecided
Assigned to: Ofer Ben-Yacov

Bug Description

The current l2gw project provides an API and a reference implementation for connecting overlay networks with bare-metal servers. Another use case we would like to propose is connecting overlay network(s) across sites. The current l2gw API is not sufficient to enable this use case.
We would like to extend the API with additional capabilities to specify a remote gateway and to enable the l2gw to connect to remote gateways. To support the use case where MAC learning across sites is not possible, we will also need an API that allows the user to specify the MAC addresses that can be reached via the remote gateway.
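To make the proposal concrete, here is a rough sketch of the kind of data the extended API could carry. The resource names and fields below (l2_remote_gateway, l2_remote_gateway_connection, l2_remote_mac, seg_id, and so on) are illustrative assumptions only, not part of any existing networking-l2gw API:

    # Hypothetical request bodies for the proposed extension; all names are
    # illustrative only and do not exist in networking-l2gw today.

    # A gateway at another site, identified by its tunnel endpoint address.
    remote_gateway = {
        "l2_remote_gateway": {
            "name": "site-b-gw",
            "ipaddr": "203.0.113.10",   # remote VTEP / gateway endpoint
        }
    }

    # A connection binding a local overlay network to that remote gateway.
    remote_gateway_connection = {
        "l2_remote_gateway_connection": {
            "gateway": "site-b-gw",
            "network_id": "<local-overlay-network-uuid>",
            "seg_id": 1234,             # segmentation id used on the wire
        }
    }

    # When MAC learning across sites is not possible, the user states which
    # MAC addresses are reachable through the remote gateway.
    remote_mac = {
        "l2_remote_mac": {
            "mac": "fa:16:3e:aa:bb:01",
            "rgw_connection": "<remote-gateway-connection-uuid>",
        }
    }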

Tags: rfe
Revision history for this message
Ofer Ben-Yacov (ofer-benyacov) wrote :
Revision history for this message
Chaoyi Huang (joehuang) wrote :

Good post. Some comments on the document:

1. For the l2-remote-gateway-connection, not only provide the segmentation-id, but also the network type, such as VXLAN or GRE. VXLAN can be implemented first.

2. A batch MAC creation command would be very useful, in addition to the single-MAC creation command. (Both suggestions are sketched below.)
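Purely as a sketch of those two suggestions (field names are assumptions, not an existing API): the connection could carry a network type alongside the segmentation id, and MACs could be created as a batch in a single request.

    # Suggestion 1: the remote gateway connection names the tunnel type as well,
    # with VXLAN implemented first and GRE possible later (illustrative only).
    remote_gateway_connection = {
        "l2_remote_gateway_connection": {
            "gateway": "site-b-gw",
            "network_id": "<local-overlay-network-uuid>",
            "network_type": "vxlan",    # or "gre" once supported
            "seg_id": 1234,
        }
    }

    # Suggestion 2: a batch MAC creation request that accepts a list in one call
    # instead of one call per MAC (illustrative only).
    remote_mac_batch = {
        "l2_remote_macs": [
            {"mac": "fa:16:3e:aa:bb:01"},
            {"mac": "fa:16:3e:aa:bb:02"},
            {"mac": "fa:16:3e:aa:bb:03"},
        ]
    }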

Revision history for this message
Chaoyi Huang (joehuang) wrote :

A requirement from OPNFV multisite is also described here and supports this RFE:
    https://bugs.launchpad.net/neutron/+bug/1484005

    Most telecom applications have already been designed as
    Active-Standby/Active-Active/N-Way to achieve high availability
    (99.999%, which corresponds to 5.26 minutes of unplanned downtime per year).
    Typically, state replication or heartbeats between the
    Active-Standby/Active-Active/N-Way members (directly, via replicated database
    services, or via a privately designed message format) are required.

    We have to accept the currently limited availability (99.99%) of a
    given OpenStack instance, and intend to provide the availability of the
    telecom application by spreading its functions across multiple OpenStack
    instances. To help with this, many people appear willing to provide multiple
    “independent” OpenStack instances in a single geographic site or in different
    sites, with special networking (L2/L3) connectivity between clouds.

    The telecom application often has different networking planes for different
    purposes:

    1) external network plane: used for communication with other telecom applications.

    2) components inter-communication plane: one VNF often consists of several
       components; this plane is designed for those components to communicate with
       each other.

    3) backup plane: this plane is used for heartbeats or state replication
       between the components of an active/standby, active/active, or N-way cluster.

    4) management plane: this plane is mainly used for management purposes.

    Generally these planes are separated from each other. For a legacy telecom
    application, each internal plane will have its own fixed or flexible IP
    addressing plan. There are some interesting/hard requirements on the networking
    (L2/L3) between OpenStack instances, at least for the backup plane across
    different OpenStack instances:

    Overlay L2 networking is required as the backup plane for heartbeat or state
    replication. An overlay L2 network is preferred because:

       a) Legacy compatibility: some telecom applications have a built-in internal L2
          network; to make it easy to move these applications to a virtualized telecom
          application, it is better to provide an L2 network.

       b) IP overlapping: multiple telecom applications may have overlapping IP
          addresses for cross-OpenStack-instance networking. Therefore, an overlay L2
          networking feature across Neutron instances is required in OpenStack.

Revision history for this message
Chaoyi Huang (joehuang) wrote :

Regarding the statement in the document, "API to allow user to specify MAC addresses that can be reached via remote gateway": not only the MAC, but also the IP of the VM should be added to the L2GW, for ARP response purposes. (A possible shape is sketched below.)
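One possible shape for this, again only a sketch with illustrative field names: each statically configured remote MAC entry also carries the VM's IP address, so the local side can answer ARP for the remote VM without flooding the request across sites.

    # Hypothetical remote MAC entry carrying both MAC and IP, so a local ARP
    # responder can answer for the remote VM (field names are illustrative only).
    remote_mac_entry = {
        "l2_remote_mac": {
            "mac": "fa:16:3e:aa:bb:01",
            "ip_address": "10.0.0.5",   # used to build the local ARP responder entry
            "rgw_connection": "<remote-gateway-connection-uuid>",
        }
    }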

Changed in networking-l2gw:
assignee: nobody → Ofer Ben-Yacov (ofer-benyacov)
Revision history for this message
Shinobu KINJO (shinobu) wrote :

As Chaoyi mentioned, it's more reasonable to make use of VXLAN functionalities.

Key words:
 - multicast group
 - unicast group
 - VNID

Here is the scenario:

Some system, let's say system-A, sends out an ARP request for some IP on its L2 VXLAN network.

VTEP1 receives this ARP request. If it does not have a mapping for that IP, VTEP1 encapsulates the ARP request in an IP multicast packet and forwards it to the VXLAN multicast group.

This multicast packet has the IP address of VTEP1 as its source IP and the VXLAN multicast group address as its destination IP address.

The multicast packet is distributed to all members of the tree. VTEP2 and VTEP3 receive the encapsulated multicast packet because they have joined the VXLAN multicast group.

The packet is de-encapsulated and the VNID in the VXLAN header is checked. If it matches their configured VXLAN segment VNID, they forward the ARP request onto their local VXLAN networks.

They also learn the IP address of VTEP1 from the outer IP header, inspect the packet to learn the MAC address of system-A, and place this mapping in their local tables.

The target system, let's say system-B, receives the ARP request forwarded by VTEP2, responds with its own MAC address, and learns the IP/MAC mapping.

VTEP2 receives system-B's ARP reply with the IP/MAC mapping. It can use the unicast tunnel to forward the ARP reply back to VTEP1.

In the encapsulated unicast packet, the source IP address is now VTEP2's and the destination IP address is VTEP1's.

VTEP1 receives the encapsulated ARP reply from VTEP2, then de-encapsulates it and forwards the ARP reply to system-A. It also learns the IP address of VTEP2.
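A toy model of the flood-and-learn behaviour in this walkthrough, just to make the table updates explicit; this is not networking-l2gw code, and the multicast group address is made up:

    # Minimal sketch: each VTEP keeps an inner-MAC -> remote-VTEP-IP table,
    # floods to the multicast group when it has no entry, and learns peer
    # VTEP addresses from the outer header of packets it receives.
    class Vtep:
        def __init__(self, name, ip, vni):
            self.name = name
            self.ip = ip
            self.vni = vni
            self.mac_table = {}      # inner MAC -> remote VTEP IP

        def forward(self, inner_dst_mac):
            """Pick the outer destination for an encapsulated frame."""
            if inner_dst_mac in self.mac_table:
                return ("unicast", self.mac_table[inner_dst_mac])
            return ("multicast", "239.1.1.1")    # flood to the VXLAN group

        def receive(self, outer_src_ip, inner_src_mac):
            """Learn which VTEP the inner source MAC lives behind."""
            self.mac_table[inner_src_mac] = outer_src_ip

    vtep1 = Vtep("VTEP1", "192.0.2.1", vni=5000)
    vtep2 = Vtep("VTEP2", "192.0.2.2", vni=5000)

    # system-A's ARP request: VTEP1 has no entry yet, so it floods to the group.
    print(vtep1.forward("ff:ff:ff:ff:ff:ff"))     # ('multicast', '239.1.1.1')

    # VTEP2 receives the flooded request and learns system-A behind VTEP1.
    vtep2.receive(outer_src_ip="192.0.2.1", inner_src_mac="fa:16:3e:00:00:0a")

    # system-B's ARP reply can now go back unicast to VTEP1, which in turn
    # learns system-B behind VTEP2 from the outer header.
    print(vtep2.forward("fa:16:3e:00:00:0a"))     # ('unicast', '192.0.2.1')
    vtep1.receive(outer_src_ip="192.0.2.2", inner_src_mac="fa:16:3e:00:00:0b")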

Any comments would be appreciated.
