[RFE] Support for multiple L2 agents on a host

Bug #1544676 reported by Jason Niesz
This bug affects 1 person
Affects: neutron
Status: Won't Fix
Importance: Wishlist
Assigned to: Unassigned
Milestone: (none)

Bug Description

[Description]
Currently it is not possible to run multiple L2 agents on a host without them conflicting with each other. For example, if I set up and configure both the Linux bridge and Open vSwitch agents on a compute host, both agents will try to enslave the instance's tap interface, resulting in a conflict.
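
The conflict can be illustrated with a deliberately simplified sketch (not actual Neutron agent code; the real agents discover devices differently, but the effect is the same): both agents on the host claim every tap device, so each tries to enslave the same interface when an instance boots.

    # Simplified illustration of the conflict, not actual agent code.

    def linuxbridge_claims(host_devices):
        # Linux bridge agent enslaves tap devices into its bridges.
        return {d for d in host_devices if d.startswith('tap')}

    def ovs_claims(host_devices):
        # OVS agent plugs the same tap devices into br-int.
        return {d for d in host_devices if d.startswith('tap')}

    devices = ['tap1234abcd', 'eth0']
    print(linuxbridge_claims(devices) & ovs_claims(devices))
    # {'tap1234abcd'} -- both agents fight over this interface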

[Proposed Change]
Add a mechanism to associate a network with an L2 agent type. When a new network is provisioned, it would be associated with an L2 agent type. When a new instance is launched on that network, only the appropriate L2 agent would be called to enslave the instance's tap interface.
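
A rough sketch of what the server-side association and dispatch could look like. This is illustrative only: the registry, helper, and default-type behavior below are assumptions, not existing Neutron APIs (the agent_type strings do match the values Neutron agents report today).

    # Hypothetical sketch: associate each network with an L2 agent type
    # and notify only agents of that type when wiring ports, so exactly
    # one agent enslaves the instance's tap interface.

    AGENT_TYPE_LINUXBRIDGE = 'Linux bridge agent'
    AGENT_TYPE_OVS = 'Open vSwitch agent'

    class NetworkAgentTypeRegistry:
        """Maps network IDs to the L2 agent type that should handle them."""

        def __init__(self, default_agent_type=AGENT_TYPE_LINUXBRIDGE):
            self._default = default_agent_type
            self._by_network = {}

        def associate(self, network_id, agent_type):
            self._by_network[network_id] = agent_type

        def agent_type_for(self, network_id):
            return self._by_network.get(network_id, self._default)

    def agents_to_notify(registry, network_id, agents_on_host):
        """Filter a host's L2 agents down to the type associated with
        the network, so the other agents ignore the new tap device."""
        wanted = registry.agent_type_for(network_id)
        return [a for a in agents_on_host if a['agent_type'] == wanted]

    registry = NetworkAgentTypeRegistry()
    registry.associate('net-ovs', AGENT_TYPE_OVS)

    host_agents = [
        {'agent_type': AGENT_TYPE_LINUXBRIDGE, 'host': 'compute-1'},
        {'agent_type': AGENT_TYPE_OVS, 'host': 'compute-1'},
    ]

    # New network handled by OVS; legacy networks fall through to the
    # Linux bridge default.
    print(agents_to_notify(registry, 'net-ovs', host_agents))
    print(agents_to_notify(registry, 'net-legacy', host_agents))

Keeping the association per network rather than per host is what would let old and new networks coexist on the same compute node during a migration.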

[Reason for Change]
Having the capability to run multiple L2 agents on a host, with networks associated with an agent type, would allow for in-place migrations between different networking scenarios. An example of this would be migrating from provider networking with Linux bridge to DVR with OVS. With this new capability, I could configure and spin up OVS agents across all my compute hosts and provision new networks associated with the OVS agent type. I could then migrate instances over from provider networking to DVR with OVS. The current option for migration forces me to maintain a separate set of dedicated compute hosts for migrating between different networking scenarios.

There could also be other reasons to support multiple L2 agents, such as performance or functionality: it might make sense to back one network with OVS and another with Linux bridge. While I gave Linux bridge and OVS as examples, the set of supported L2 agents could also include possible future options, such as Cisco Vector Packet Processing (VPP).

Tags: rfe
Changed in neutron:
importance: Undecided → Wishlist
Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

This can get pretty messy. I would be skeptical if we allowed this type of use case.

Changed in neutron:
status: New → Confirmed
Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

No response since last solicitation. Let's gather the temperature for this type of use case in the drivers meeting.

Changed in neutron:
status: Confirmed → Triaged
Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

There are multiple migration scenarios that can be considered; you mentioned a few, but this is still too broad to really grasp the complexity involved in delivering a solution. Cold migration is usually the one that guarantees the most reliable outcome, and it's easier to manage and implement. I'd personally be wary of going down such a path, which I could envision being challenging to develop, test, maintain and support. It's surely a nice feat of engineering, but it comes with more issues related to rolling upgrades etc. I'd rather focus on higher-priority items. Let's gather more feedback. In the meantime, if you can provide more details on what you are actually trying to achieve, then we can perhaps narrow down what's required.

Changed in neutron:
assignee: nobody → Narender (narender-soorineeda)
Changed in neutron:
assignee: Narender (narender-soorineeda) → nobody
status: Triaged → Won't Fix