[RFE] Method to guarantee that at least one mechanism driver implements security groups

Bug #1518087 reported by yalei wang
Affects: neutron
Status: Won't Fix
Importance: Wishlist
Assigned to: yalei wang

Bug Description

Currently there is no way to determine whether a mechanism driver (MD) supports security groups (SGs) or not, and no way to guarantee that at least one MD will be responsible for handling security group events.

Other extensions, such as QoS, probably have a similar problem.

More information can be found here:
https://review.openstack.org/#/c/240356/
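As an illustration, one possible shape for such a check (everything here is an invented sketch: ML2 mechanism drivers expose no `supports_security_groups` flag today, and the class and function names are hypothetical):

```python
# Hypothetical sketch: validate at configuration-load time that at least
# one ML2 mechanism driver claims responsibility for security groups.
# The `supports_security_groups` flag is an assumption, not a real ML2 API.

class MechanismDriver:
    """Toy stand-in for an ML2 mechanism driver."""
    supports_security_groups = False


class OVSDriver(MechanismDriver):
    # Host-level reference driver: enforces SGs (e.g. via iptables).
    supports_security_groups = True


class ToRSwitchDriver(MechanismDriver):
    # Switch-level driver that only manages VLANs; no SG enforcement.
    supports_security_groups = False


def validate_sg_coverage(drivers):
    """Raise if no configured driver can enforce security groups."""
    if not any(d.supports_security_groups for d in drivers):
        raise RuntimeError(
            "No configured mechanism driver enforces security groups")


validate_sg_coverage([OVSDriver(), ToRSwitchDriver()])  # passes
```

With such a check, a deployment configured only with drivers that cannot enforce SGs would fail fast at startup instead of silently ignoring security group events.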

Tags: rfe
yalei wang (yalei-wang)
summary: - add the precommit primitive for security groups
+ [RFE] add the precommit primitive for security groups
description: updated
Gary Kotton (garyk)
Changed in neutron:
importance: Undecided → Wishlist
Revision history for this message
Henry Gessau (gessau) wrote : Re: [RFE] add the precommit primitive for security groups

Should the title be "Method to guarantee that at least one mechanism driver implements security groups"?

Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

I think this is two problems in one, and you should consider splitting the two. As I have mentioned time and time again, the lack of a guarantee of execution of certain actions does not apply to secgroups only, but to networks too.

As for the former aspect, if you wanted to add a BEFORE_COMMIT event to the callback system so that you can atomically execute callbacks within the same transaction, I'd argue you don't need a spec for it.
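A minimal, self-contained sketch of what such a BEFORE_COMMIT-style primitive could look like (this is a toy registry, not Neutron's actual callback API; the names `PRECOMMIT`, `subscribe`, and `notify` and the simulated transaction are illustrative assumptions):

```python
# Toy callback registry with a BEFORE_COMMIT-style event. Mimics the idea
# behind Neutron's callback system, but is NOT its real API.

PRECOMMIT = "precommit"
AFTER_CREATE = "after_create"

_callbacks = {}

def subscribe(event, callback):
    _callbacks.setdefault(event, []).append(callback)

def notify(event, **kwargs):
    # Exceptions raised here propagate to the caller; for PRECOMMIT that
    # lets the surrounding DB transaction roll back atomically.
    for cb in _callbacks.get(event, []):
        cb(**kwargs)

def create_security_group(db, name):
    record = {"name": name}
    # -- begin transaction (simulated with a plain list) --
    db.append(record)
    try:
        notify(PRECOMMIT, resource=record)   # runs inside the transaction
    except Exception:
        db.remove(record)                    # simulated rollback
        raise
    # -- commit --
    notify(AFTER_CREATE, resource=record)    # runs after commit
    return record
```

A subscriber that raises inside the PRECOMMIT notification aborts the creation atomically, which is exactly the guarantee the precommit primitive is meant to provide over AFTER_* events.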

Changed in neutron:
status: New → Triaged
Revision history for this message
yalei wang (yalei-wang) wrote :

Thanks all. I will update the BP to reflect it, @Gessau.
And @armax, I will file a patch for the PRE_COMMIT issue. The spec will focus on the guarantee of execution of mechanism drivers called for network/secgroup events.

yalei wang (yalei-wang)
summary: - [RFE] add the precommit primitive for security groups
+ [RFE] security group event need be enforced by at least one mech driver
description: updated
yalei wang (yalei-wang)
summary: - [RFE] security group event need be enforced by at least one mech driver
+ [RFE] Method to guarantee that at least one mechanism driver implements
+ security groups
description: updated
Changed in neutron:
assignee: nobody → yalei wang (yalei-wang)
Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

Based on discussion [1], we ruled against this. The decision can be revised if the justification for rejection is disputed on other grounds.

[1] http://eavesdrop.openstack.org/meetings/neutron_drivers/2015/neutron_drivers.2015-12-08-15.03.log.html

Changed in neutron:
status: Triaged → Won't Fix
Revision history for this message
Robert Kukura (rkukura) wrote :

I just looked at the drivers meeting log, and I feel this decision has been made based on an incomplete understanding of ML2. I think it's useful to consider three different use cases for ML2 deployment:

1) The simplest use case involves just one mechanism driver, as with the typical OVS or LB reference implementations. In this case, the single configured MD is responsible for enforcing/providing SGs, QoS, and any other extensions that apply. If the single MD can't enforce some extension, it can be looked at as a misconfiguration that should never have occurred.

2) A slightly more complex use case adds another MD for a ToR switch. Here, the combination of two (or possibly more) MDs is responsible for enforcing/providing all needed extensions. The ToR MD might simply enable VLANs for switch ports, or might dynamically allocate VLANs (or some other type of segments) using HPB. But typically extensions would be enforced/provided by the host-level MD. Again, if the (uniform) combination of MDs can't provide the right semantics, it's a misconfiguration.

3) Where it gets interesting is when the deployment is no longer uniform, and different MDs or combinations of MDs might be involved in binding different ports. This could be a multi-vendor deployment with different types of ToR switches. Or it might involve a mix of different types of compute nodes (different hypervisors, vSwitches, baremetal, containers, ...). In these cases, how or whether each extension can be enforced/provided might vary drastically among different possible port bindings. For example, normal nova nodes with OVS or LB would be able to enforce SGs using iptables, while baremetal nodes in the same deployment might not be able to enforce SGs at all. Or maybe a ToR switch to which a baremetal node is connected is able to enforce the SGs (or at least a subset of SG rules).

I can see how an SG-specific (or more general) mechanism for ensuring that at least one MD is responsible for enforcing/providing each applicable extension in each port binding seems like overkill for cases 1 and 2. But to properly support case 3, something like this seems essential.

It's not simply a matter of returning an error to the user when the applicable extensions for a port cannot be enforced/provided, but also of letting ML2's port binding reject inappropriate potential bindings (combinations of MDs) and move on to find another binding that will work.
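A rough sketch of that binding-level filtering idea (the data model and the `find_binding` helper are invented for illustration; real ML2 port binding operates on PortContext objects and driver chains, not dicts):

```python
# Hypothetical: reject candidate bindings (combinations of drivers) that
# cannot collectively provide a required extension, and fall through to
# the next candidate instead of failing the whole bind.

def find_binding(candidates, required_extensions):
    """Return the first candidate driver combination whose drivers,
    taken together, cover every required extension; None if no
    combination works.

    candidates: list of driver combinations; each driver is a dict with
    an 'extensions' set naming what it can enforce/provide.
    """
    for combo in candidates:
        provided = set()
        for driver in combo:
            provided |= driver["extensions"]
        if required_extensions <= provided:
            return combo
    return None
```

For example, a baremetal-only candidate with no SG support would be skipped, and the binding search would continue to a ToR+OVS combination that does cover the `security-group` extension.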

I can understand if this needs more discussion before deciding to implement it (especially a general mechanism) in mitaka, but I don't think it should be summarily dismissed by setting it to "Won't Fix".

Revision history for this message
Armando Migliaccio (armando-migliaccio) wrote :

As I mentioned in comment #4, I am happy to revise the decision, but a decision will have to be made at some point.

I'll go over your detailed explanation of why you think this is justified, but from a quick run-through, I don't see why pre-deployment validation wouldn't suffice to address any use case you mentioned here (from the least to the most complex). In the end, each MD should know where and how to enforce security groups, and I don't see how the rejection of this proposal prevents that from happening. On the other hand, I personally wouldn't want to promote tight coupling between MDs, because that defeats the whole point of modularity.

You said, and I quote you:

"letting ML2's port binding reject the inappropriate potential bindings (combinations of MDs) and move on to find another binding that will work"

This may be susceptible to race conditions, and different configuration ordering can lead to different binding results, producing something that may be difficult to troubleshoot. MDs should independently know what they are supposed to do security-wise (amongst other things), and as long as there's a fixed contract (provided by Neutron's security group API), I am unable to grasp how this proposal addresses any of your concerns.

I may not have a complete understanding of ML2, and it's fair to say that you are the most knowledgeable person about ML2 on the planet. That said, making it more complicated definitely doesn't make my job of understanding/maintaining it any easier, and since you are no longer involved as much as you used to be, I find it very hard to make an executive decision that promotes further complexity.

Revision history for this message
Robert Kukura (rkukura) wrote :

Armando certainly makes some good points in comment 6 above, but I still believe there are important use cases that are not adequately addressed solely by pre-deployment validation. These involve ML2 actively rejecting potential port bindings that would otherwise be allowed, when the semantics implied by the extension attribute values would not be provided by that binding, while each individual MD might otherwise participate in completely valid bindings. I acknowledge it's a bit late to try to solve this properly in mitaka, so I think the best way forward is for myself and others interested to prepare a concrete proposal for the N release.
