Document "multi-provider-networks" extension

Bug #1242019 reported by Salvatore Orlando
This bug report is a duplicate of: Bug #1226279: document multiprovidernet extension.
Affects: openstack-api-site
Status: Confirmed
Importance: High
Assigned to: Unassigned
Milestone: icehouse

Bug Description

This extension is now supported by various plugins, including ML2, and should therefore be documented.

The multi-provider extension is similar to the provider extension, as the name suggests.
From an API perspective, it allows users with admin credentials to specify multiple "physical" bindings for a given neutron network.

Such bindings are specified using the 'segments' attribute, which is a list. Each element of this list has the same structure as the 'provider network attributes'.
These attributes are:
- provider:network_type
- provider:physical_network
- provider:segmentation_id

For these attributes, the validation rules are identical to those of the provider networks extension. Naturally, both extensions cannot be used on the same network at the same time.
So far the multi-provider extension is supported by the NSX and ML2 plugins.

For the ML2 plugin, it allows, for instance, specifying:
- multiple VLANs for a given network
- a VXLAN tunnel ID and a VLAN
This allows the definition of "multi-segment" networks (see the sketch below).
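
For illustration only (this sketch is not part of the original report), a request creating such a network might look as follows with python-neutronclient; the credentials, names, and segmentation IDs are placeholders:

    # Sketch: creating a multi-segment network via python-neutronclient.
    # All credentials, names, and IDs are placeholder values.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    body = {
        'network': {
            'name': 'multi-segment-net',
            # Each element mirrors the provider network attributes above.
            'segments': [
                {'provider:network_type': 'vlan',
                 'provider:physical_network': 'physnet1',
                 'provider:segmentation_id': 100},
                {'provider:network_type': 'vxlan',
                 'provider:physical_network': None,
                 'provider:segmentation_id': 5000},
            ],
        }
    }
    network = neutron.create_network(body=body)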

However, it might be worth documenting how the various ML2 mechanism drivers then leverage this information at the data plane layer, if that is deemed relevant for the API documentation.

Tags: netconn-api
Changed in openstack-api-site:
milestone: none → havana
tags: added: netconn-api
Aaron Rosen (arosen)
Changed in openstack-api-site:
assignee: nobody → Aaron Rosen (arosen)
Revision history for this message
Diane Fleming (diane-fleming) wrote :

backport: havana

Changed in openstack-api-site:
milestone: havana → icehouse
Revision history for this message
Tom Fifield (fifieldt) wrote :

Hi Aaron, are you still working on this?

Changed in openstack-api-site:
assignee: Aaron Rosen (arosen) → Diane Fleming (diane-fleming)
description: updated
description: updated
Changed in openstack-api-site:
assignee: Diane Fleming (diane-fleming) → nobody
Revision history for this message
Robert Kukura (rkukura) wrote :

Here's a quick description of ML2 port binding, including how multi-segment networks are handled:

    Port binding is how the ML2 plugin determines the mechanism driver that handles the port, the network segment to which the port is attached, and the values of the binding:vif_type and binding:vif_details port attributes. Its inputs are the binding:host_id and binding:profile port attributes, as well as the segments of the port's network. When port binding is triggered, each registered mechanism driver's bind_port() function is called, in the order specified in the mechanism_drivers config variable, until one succeeds in binding, or all have been tried. If none succeed, the binding:vif_type attribute is set to 'binding_failed'.

    In bind_port(), each mechanism driver checks if it can bind the port on the binding:host_id host, using any of the network's segments, honoring any requirements it understands in binding:profile. If it can bind the port, the mechanism driver calls PortContext.set_binding() from within bind_port(), passing the chosen segment's ID, the values for binding:vif_type and binding:vif_details, and optionally, the port's status.

    A common base class for mechanism drivers supporting L2 agents implements bind_port() by iterating over the segments and calling a try_to_bind_segment_for_agent() function that decides whether the port can be bound based on the agents_db info periodically reported via RPC by that specific L2 agent. For network segment types of 'flat' and 'vlan', the try_to_bind_segment_for_agent() function checks whether the L2 agent on the host has a mapping from the segment's physical_network value to a bridge or interface. For tunnel network segment types, try_to_bind_segment_for_agent() checks whether the L2 agent has that tunnel type enabled.
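
For illustration only (this sketch is not part of the comment above), a heavily simplified mechanism driver could look as follows; bind_port(), PortContext.set_binding(), and the driver_api constants are the ML2 names described above, while the class itself and the _segment_reachable_from() helper are hypothetical:

    # Simplified sketch of the bind_port() flow; assumes Icehouse-era
    # ML2 interfaces.
    from neutron.plugins.ml2 import driver_api as api

    class SketchMechanismDriver(api.MechanismDriver):
        def initialize(self):
            pass

        def bind_port(self, context):
            host = context.current['binding:host_id']
            # Try each of the network's segments; a multi-segment network
            # simply offers more candidates to choose from.
            for segment in context.network.network_segments:
                if self._segment_reachable_from(host, segment):
                    context.set_binding(
                        segment[api.ID],
                        'ovs',                  # binding:vif_type
                        {'port_filter': True})  # binding:vif_details
                    return
            # No set_binding() call: ML2 tries the next driver, and if all
            # drivers decline, binding:vif_type becomes 'binding_failed'.

        def _segment_reachable_from(self, host, segment):
            # Hypothetical stand-in for the agents_db checks described
            # above (physical_network mappings for 'flat'/'vlan' segments,
            # enabled tunnel types for tunnel segments).
            return segment[api.NETWORK_TYPE] in ('flat', 'vlan')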

Note that, although ML2 can manage binding to multi-segment networks, neutron does not manage bridging between the segments of a multi-segment network. This is assumed to be done administratively.

Finally, at least in ML2, the providernet and multiprovidernet extensions are two different APIs to supply and view the same underlying information. The older providernet extension can only deal with single-segment networks, but is easier to use. The newer multiprovidernet extension handles multi-segment networks and potentially supports an extensible set of segment properties, but is more cumbersome to use, at least from the CLI. Either extension can be used to create single-segment networks with ML2. Currently, ML2 network operations return only the providernet attributes (provider:network_type, provider:physical_network, and provider:segmentation_id) for single-segment networks, and only the multiprovidernet attribute (segments) for multi-segment networks. It could be argued that all attributes should be returned from all operations, with a provider:network_type value of 'multi-segment' returned when the network has multiple segments. A blueprint in the works for Juno, which lets each ML2 type driver define whatever segment properties make sense for that type, may lead to eventual deprecation of the providernet extension.
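
To make that difference concrete, the two response shapes might look roughly like this (an illustration with made-up values, not output captured from a deployment):

    # Illustration of the two ML2 response shapes described above.
    single_segment_net = {
        'id': 'NET-UUID-1',  # placeholder
        'provider:network_type': 'vlan',
        'provider:physical_network': 'physnet1',
        'provider:segmentation_id': 100,
    }

    multi_segment_net = {
        'id': 'NET-UUID-2',  # placeholder
        'segments': [
            {'provider:network_type': 'vlan',
             'provider:physical_network': 'physnet1',
             'provider:segmentation_id': 100},
            {'provider:network_type': 'vxlan',
             'provider:physical_network': None,
             'provider:segmentation_id': 5000},
        ],
    }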

Revision history for this message
Diane Fleming (diane-fleming) wrote :

Hi Diane,

Basically, the multi-provider extension is similar to the provider extension, as the name suggests.
From an API perspective, it allows users with admin credentials to specify multiple "physical" bindings for a given neutron network.

Such bindings are specified using the 'segments' attribute, which is a list. Each element of this list has the same structure as the 'provider network attributes'.
These attributes are:
- provider:network_type
- provider:physical_network
- provider:segmentation_id

For these attributes, the validation rules are identical to those of the provider networks extension. Naturally, both extensions cannot be used on the same network at the same time.
So far the multi-provider extension is supported by the NSX and ML2 plugins.
I am not going to bore you with NSX, as I think it's beyond the scope of your task ;)
For the ML2 plugin, it allows, for instance, specifying:
- multiple VLANs for a given network
- a VXLAN tunnel ID and a VLAN
This allows the definition of "multi-segment" networks.

However, I still do not fully understand how the various ML2 mechanism drivers then leverage this information at the data plane layer. For instance, this could be used to provide an abstraction of a single network which actually spans different transport domains. If this information is important for the documentation you're going to write, I suggest getting in touch with Kyle Mestery (mestery) or Bob Kukura (rkukura), who co-lead the ML2 team.

Regards,
Salvatore
