Need to provide a formal API for Nova to use to plug/unplug VIFs

Bug #1046766 reported by Daniel Berrange on 2012-09-06
Assigned to: Gary Kotton

Bug Description

The current interaction model between Nova and Quantum is critically flawed in a number of respects:

 - There is a hard configuration dependency between Quantum & Nova: when changing the Quantum network driver implementation, the admin must also update the Nova libvirt_vif_driver config parameter to match.

 - Some Quantum drivers are providing custom plugin implementations of the Nova libvirt VIF driver classes. This exposes Quantum to the internal implementation details of the libvirt driver in Nova. These details ought to be private to Nova, since they can be changed arbitrarily at any time, which will break Quantum plugins (e.g. see bug 1046758).

 - Some Quantum drivers are tied to usage with Nova and libvirt. This is more or less the same point as above: since the drivers need to provide custom libvirt VIF drivers to work with Nova, this ties Quantum to Nova + libvirt, preventing its re-use with other non-libvirt drivers or applications like oVirt.

 - Some Quantum drivers are doing work which belongs under the hypervisor's control. E.g. the Quantum Linux bridge driver wants to create TAP devices itself & add them to the bridge. This is only achievable by using the libvirt type=ethernet VIF config. Not only is this config designated unsupported by RHEL (due to its inherent security limitations), but it can only ever work with KVM. libvirt's LXC and Xen drivers do not use TAP devices for their networking, and want to be in charge of adding their own interface to the bridge.

All these problems could be solved if Quantum exposed a formal API for compute services to call to plug/unplug VIFs, instead of relying on hooking into the libvirt VIF driver internals. The API would do any port configuration work that might be necessary, and then return information about where the VIF should be attached & what parameters it should use. The compute service would then take care of deciding the optimal libvirt configuration & let the hypervisor actually create & attach the VIF to the network.
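
A minimal sketch of what such a plug/unplug contract could look like. All names here (VIFPlugAPI, the returned keys) are hypothetical illustrations, not an existing Quantum interface:

```python
class VIFPlugAPI:
    """Hypothetical formal API a compute service would call instead of
    hooking into hypervisor-specific VIF driver internals."""

    def plug(self, port_id, mac_address):
        """Do any port setup work, then describe the attachment.

        Returns a plain data structure; the compute service alone decides
        how to translate it into hypervisor configuration.
        """
        # ... plugin-specific port setup would happen here ...
        return {
            "vif_type": "bridge",      # e.g. bridge / ovs / macvtap
            "bridge_name": "br100",    # where the VIF should be attached
            "mac_address": mac_address,
        }

    def unplug(self, port_id):
        """Tear down whatever plug() set up for this port."""
```

The key property is that the return value is pure data: the compute service never sees how the network backend did its setup, and the network service never sees hypervisor configuration.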

dan wendlandt (danwent) wrote :

The one point I strongly agree with is that vif-drivers should not exist in Quantum, as it makes it impossible to have unit tests that confirm they are not broken. The fact that a vif-driver still exists in the Cisco plugin is an oversight as this is not a configuration used by many in the community.

I also agree that there's room for improvement in terms of removing the hard configuration dependency. The existing vif-plugging design is from Diablo, when Quantum was rarely used and so we were trying to be minimally invasive. At this point, it would be nice if when Nova creates a port, Quantum can tell Nova which vif plugging to use, and any additional parameters.

I think it's worth having a discussion on this at the Grizzly summit. I'll add it to my list.

Changed in quantum:
status: New → Confirmed
dan wendlandt (danwent) wrote :

High, but for Grizzly.

Changed in quantum:
importance: Undecided → High

Ian Wells (ijw-ubuntu) wrote :

If you look in Nova, the new network API has a VIF class. It's a very weak class with unstructured data. It would perhaps make sense for that to be tidied up, and then subclassed to provide, from whatever provider, the necessary information to hypervisors to plug into a VIF (particularly bearing in mind that a VIF is not always just the name of a bridge to join). The current 'I have this class that sits off to the side and does all the actions' mechanism is not great and it's also not terribly OO.

The contract between nova and the network code would be 'if you implement this VIF class then you can provide a network endpoint for nova to use'.
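
Sketched in Python, the tidied-up contract Ian describes might look like this (class and method names are hypothetical, not Nova's actual network API):

```python
class VIF:
    """Hypothetical base class: structured attachment data plus a narrow
    contract, rather than an unstructured bag of fields."""

    def __init__(self, vif_id, mac_address):
        self.vif_id = vif_id
        self.mac_address = mac_address

    def attachment_info(self):
        """Return the data a hypervisor needs to plug this VIF in."""
        raise NotImplementedError


class BridgeVIF(VIF):
    """A provider subclass where the attachment point is a Linux bridge
    (for other providers it may be more than just a bridge name)."""

    def __init__(self, vif_id, mac_address, bridge):
        super().__init__(vif_id, mac_address)
        self.bridge = bridge

    def attachment_info(self):
        return {"type": "bridge", "bridge": self.bridge}
```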

dan wendlandt (danwent) wrote :

btw, I created a summit session to discuss potential improvements for this. Anyone want to volunteer to drive?

Gary Kotton (garyk) wrote :

I'll be happy to give it a bash.

dan wendlandt (danwent) wrote :

great. I updated the comment of the session to say that you will lead it (unfortunately, the system doesn't let me update the owner of a session; I guess that's a feature request to ttx).

Daniel Berrange (berrange) wrote :

@Ian the entire VIF class mechanism horribly over-engineers the whole problem, and harms the maintainability/flexibility of Nova virt driver development. If you look at possible hypervisor configuration options, there are merely a handful of supportable ways to attach a VM's VIF to a host network (traditional bridge, openvswitch bridge, macvtap, 802.1Qbh/802.1Qbg). We do not need to implement a multitude of classes to deal with that. All we need is to define a set of standardized metadata that tells a hypervisor how to connect a VIF to a host network, and for each Quantum plugin figure out which attachment is applicable. If we define the metadata then there is no need for any VIF class/subclass mechanism, since the virt drivers can just process the metadata directly.
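
To illustrate the metadata-only approach (the keys and values below are hypothetical examples, not a defined Quantum schema): each port carries a flat record, and the virt driver maps it to its own config with no class hierarchy at all:

```python
# Illustrative metadata records, one per supportable attachment style.
VIF_METADATA_EXAMPLES = [
    {"vif_type": "bridge", "bridge": "br100"},
    {"vif_type": "ovs", "bridge": "br-int", "ovs_interfaceid": "port-uuid"},
    {"vif_type": "macvtap", "source_dev": "eth0", "mode": "vepa"},  # 802.1Qbg
]


def libvirt_interface_type(metadata):
    """How a virt driver might interpret the metadata directly: both
    plain and OVS bridges map to libvirt's bridge interface, while
    macvtap maps to a direct interface."""
    return {
        "bridge": "bridge",
        "ovs": "bridge",
        "macvtap": "direct",
    }[metadata["vif_type"]]
```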

dan wendlandt (danwent) wrote :

@berrange if the only role of vif-plugging was passing config to libvirt, I think that would be true. However, many vif-plugging drivers do much more than that (e.g., executing additional system calls). I agree with most (all?) of your goals, but I want to make sure we have the problem-space properly defined.

Daniel Berrange (berrange) wrote :

Yep, IMHO the existence of these various non-config-related setup tasks in the VIF drivers is one of the critical problems with the current design. The VIF driver implementation is specific to libvirt, but those non-config setup tasks need to be done by any Nova virt driver that wishes to integrate with Quantum. The tasks also need to be done if any non-Nova projects (like oVirt) want to use Quantum for network management. In essence these VIF driver setup tasks are Quantum code that has been punted to Nova, which introduces a mutual dependency between Quantum & Nova. The only way to avoid duplication of code for these setup tasks is to get a strict separation of setup vs. configuration information, and then have all the setup tasks live in Quantum code.

My 33,000ft view is that Quantum would need some kind of "plug" and "unplug" APIs that Nova virt drivers (or non-Nova virt code) can invoke. The "plug" API would be given information about what network the VIF is intended to plug into. They would do whatever VIF setup tasks are required, and then return a short config document specifying how the hypervisor should connect the VIF to the network. The virt driver would solely be responsible for translating this config into whatever the hypervisor config needs to look like.

Now I understand that as it stands the Quantum API is somewhat detached from the compute nodes where work may be done by optional agents. So adding this kind of "plug" and "unplug" APIs into the current Quantum API service may not work with its architecture. Thus it might be necessary to have a local-only API exposed by the agent running on the compute node to hold just the "plug" and "unplug" operation implementations. For Quantum drivers which don't already have an agent, this would necessitate having a simple api-only agent on the node, but I think this is worthwhile to get good architectural separation between Quantum and Nova.
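
From the compute side, the division of labour described above could look roughly like this. The agent endpoint and all names are assumed for illustration; no such agent API exists today:

```python
def spawn_vif(agent, port_id, mac):
    """Compute-side flow: ask the local Quantum agent to plug the port,
    then translate its answer into hypervisor config. The agent does
    all setup work; the virt driver alone decides the hypervisor
    representation."""
    attachment = agent.plug(port_id, mac)
    if attachment["vif_type"] == "bridge":
        return {
            "interface_type": "bridge",
            "source": attachment["bridge_name"],
            "mac": mac,
        }
    raise ValueError("unsupported vif_type: %s" % attachment["vif_type"])
```

Because the virt driver only consumes the returned config document, the same agent could serve non-Nova consumers (like oVirt) without any libvirt-specific code in Quantum.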

Gary Kotton (garyk) on 2012-10-25
Changed in quantum:
assignee: nobody → Gary Kotton (garyk)
milestone: none → grizzly-1

Fix proposed to branch: master

Changed in quantum:
status: Confirmed → In Progress
Gary Kotton (garyk) wrote :

This bug will be treated as part of the BP -

Changed in quantum:
milestone: grizzly-1 → grizzly-2
dan wendlandt (danwent) wrote :

since we're tracking this as a BP, invalidating the bug.

Changed in quantum:
status: In Progress → Invalid
Gary Kotton (garyk) on 2012-12-13
Changed in quantum:
milestone: grizzly-2 → none