Activity log for bug #1586056

Date Who What changed Old value New value Message
2016-05-26 13:59:52 Slawek Kaplonski bug added bug
2016-05-26 14:00:01 Slawek Kaplonski neutron: assignee Slawek Kaplonski (slaweq)
2016-05-26 14:07:44 Miguel Angel Ajo summary Improve mechanism of validation if QoS rule is supported by mechanism driver Improved validation mechanism for QoS rules with mechanism drivers
2016-05-26 14:35:23 Miguel Angel Ajo description

Old value:

Currently mechanism drivers in the ML2 plugin can specify a list of QoS rule types supported by the driver. For example, for the openvswitch mech_driver these are the "bandwidth_limit_rule" (which in fact applies only to egress traffic) and the "dscp_marking" rule. The Neutron API can report which rules are supported by taking the common set of supported rules from each loaded mechanism driver. There are two problems with such a simple check:
- If rules are created or a policy is applied to a port, there is no validation at all. This means that, for example, a policy with a dscp_marking rule can be applied without any errors to a port bound with the linuxbridge or sriov mech_drivers.
- There is no way for a mechanism driver to support some rules only with specific parameters. For example, bandwidth_limit_rule could have a parameter "direction" with values "ingress" and "egress", and some mechanism drivers could support bandwidth limiting only with direction=egress.

We propose changing how supported rule types are calculated and how validation works. Supported rule types would be a set of rules together with the list of parameters and allowed values for those rules. This set will contain rules supported by at least one of the loaded mechanism drivers. Example output from neutron could be:

admin@devstack-1:/opt/stack/neutron$ neutron qos-available-rule-types
+----------------------------------------------------------------------------------+-----------------+
| parameters                                                                        | type            |
+----------------------------------------------------------------------------------+-----------------+
| {u'dscp_mark': []}                                                                | dscp_marking    |
| {u'max_kbps': [], u'direction': [u'ingress', u'egress'], u'max_burst_kbps': []}   | bandwidth_limit |
+----------------------------------------------------------------------------------+-----------------+

which means that:
* the dscp_marking rule is supported with the parameter dscp_mark, which can have any value
* the bandwidth_limit rule is supported with the parameters max_kbps and max_burst_kbps, which can have any value, and direction, which can only be ingress or egress

The second change is to the validation of rules. During events like:
* port_update
* network_update
* policy_update
* rule_create
* rule_update
the method neutron.services.qos.notification_drivers.qos_base:QosServiceNotificationDriverBase.validate_policy_for_{port,network} will be called and will raise an exception if the policy cannot be applied to the port (or to one of its ports). For the ML2 plugin it will raise an exception if the policy contains any rule that is not supported by the mechanism driver used for the port binding, or, if the port is unbound, by the mechanism drivers used for the specific binding:vnic_type.

New value:

Currently the neutron API can return what rules are supported by returning the minimum subset of rules supported by all mechanism drivers. Mechanism drivers in the ML2 plugin can specify a list of QoS rule types which they support. For example, the openvswitch mech_driver will report the "bandwidth_limit_rule" (which in fact applies only to egress traffic) and the "dscp_marking" rule. There are two problems with such a simple check:
- If rules are created or a policy is applied to a port, there is no validation at all. This means that, for example, a policy with a dscp_marking rule can be applied without any errors to a port bound with the linuxbridge or sriov mech_drivers.
- There is no way for a mechanism driver to support some rules only with specific parameters. For example, bandwidth_limit_rule could have a parameter "direction" with values "ingress" and "egress", and some mechanism drivers could support bandwidth limiting only with direction=egress.

We propose changes to how supported rule types are calculated and how validation works. Incompatible configurations or settings should be spotted as soon as possible and stopped at the API level with a clear explanation of what is going on. In corner cases where this cannot be determined until bind time, binding should fail, so that the administrator does not get the wrong impression that all of the policy rules are being applied when that is not the case. Reporting of available rules should also be enhanced, so the tenant and admin get a full picture of what is supported per port type (binding:vnic_type, where the portbinding extension is available):

$ neutron qos-rule-support-per-port-type
+-----------------------------------------------------------------------+
| port_type | rules_and_parameters                                      |
+-----------+-----------------------------------------------------------+
| normal    | {'dscp_marking': {'dscp_mark': []},                       |
|           |  'bandwidth_limit': {'max_kbps': [],                      |
|           |                      'max_burst_kbps': [],                |
|           |                      'direction': ['ingress', 'egress']}} |
+-----------+-----------------------------------------------------------+
| direct    | {'bandwidth_limit': {'max_kbps': [],                      |
|           |                      'max_burst_kbps': [],                |
|           |                      'direction': ['egress']}}            |
+-----------+-----------------------------------------------------------+

In plugins where no vnic_type is available, the only reported port_type should be 'normal'.

For the 'normal' port type this means:
* the dscp_marking rule is supported with the parameter dscp_mark, which can have any value
* the bandwidth_limit rule is supported with the parameters max_kbps and max_burst_kbps, which can have any value, and direction, which can only be ingress or egress

For the 'direct' (i.e. SR-IOV) port type this means:
* no dscp_marking rule support
* bandwidth_limit supports only the egress direction

The second change is to the validation of rules. During events like:
* port_update
* network_update
* policy_update
* rule_create
* rule_update
the method neutron.services.qos.notification_drivers.qos_base:QosServiceNotificationDriverBase.validate_policy_for_{port,network} will be called and will raise an exception if the policy cannot be applied to the port (or to one of its ports). For the ML2 plugin it will raise an exception if the policy contains any rule that is not supported by the mechanism driver used for the port binding, or, if the port is unbound, by the mechanism drivers used for the specific binding:vnic_type.
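To make the rule-type structure in the entry above concrete, here is a minimal Python sketch of how a mechanism driver could declare its supported rule types (a dict of parameters, where an empty list means "any value") and how a single rule could be checked against that declaration. All names here (OVS_SUPPORTED_RULES, SRIOV_SUPPORTED_RULES, rule_supported) are illustrative assumptions, not Neutron code.

# Illustrative sketch only: the names below are assumptions, not Neutron code.

# A driver would describe each supported rule type as a dict of parameters,
# where an empty list means "any value" and a non-empty list enumerates the
# allowed values -- mirroring the qos-available-rule-types output above.
OVS_SUPPORTED_RULES = {
    'dscp_marking': {'dscp_mark': []},
    'bandwidth_limit': {
        'max_kbps': [],
        'max_burst_kbps': [],
        'direction': ['ingress', 'egress'],
    },
}

SRIOV_SUPPORTED_RULES = {
    'bandwidth_limit': {
        'max_kbps': [],
        'max_burst_kbps': [],
        'direction': ['egress'],
    },
}

def rule_supported(driver_rules, rule_type, params):
    """Check one concrete rule (type plus parameter values) against a driver."""
    supported_params = driver_rules.get(rule_type)
    if supported_params is None:
        return False
    for name, value in params.items():
        allowed = supported_params.get(name)
        if allowed is None:
            return False      # parameter unknown to this driver
        if allowed and value not in allowed:
            return False      # value outside the enumerated set
    return True

# An ingress bandwidth limit passes for openvswitch but not for SR-IOV:
rule = {'max_kbps': 1000, 'direction': 'ingress'}
assert rule_supported(OVS_SUPPORTED_RULES, 'bandwidth_limit', rule)
assert not rule_supported(SRIOV_SUPPORTED_RULES, 'bandwidth_limit', rule)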
2016-05-26 14:40:48 Miguel Angel Ajo summary Improved validation mechanism for QoS rules with mechanism drivers Improved validation mechanism for QoS rules with port types
2016-05-26 14:41:18 Miguel Angel Ajo description

Old value: identical to the new value recorded in the 14:35:23 entry above.

New value: the same description, with the opening sentence changed to: "Currently the neutron API can return what rules are supported, in the context of ML2, by returning the minimum subset of rules supported by all mechanism drivers."
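For illustration, the per-port-type report proposed in these entries could be assembled from per-driver declarations along the lines sketched below; the driver names, vnic_type keys and the rules_per_port_type helper are assumptions made for this sketch, not part of the Neutron API.

DRIVER_RULES = {
    # driver -> vnic_type -> {rule_type -> {parameter -> allowed values}}
    'openvswitch': {
        'normal': {
            'dscp_marking': {'dscp_mark': []},
            'bandwidth_limit': {'max_kbps': [], 'max_burst_kbps': [],
                                'direction': ['ingress', 'egress']},
        },
    },
    'sriovnicswitch': {
        'direct': {
            'bandwidth_limit': {'max_kbps': [], 'max_burst_kbps': [],
                                'direction': ['egress']},
        },
    },
}

def rules_per_port_type(drivers=DRIVER_RULES):
    """Merge per-driver declarations into a port_type -> rules mapping."""
    report = {}
    for vnic_types in drivers.values():
        for vnic_type, rules in vnic_types.items():
            report.setdefault(vnic_type, {}).update(rules)
    return report

# Yields {'normal': {...}, 'direct': {...}}, matching the table in the
# description; a plugin without vnic_type would only ever report 'normal'.
print(rules_per_port_type())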
2016-05-26 22:17:06 Carl Baldwin summary Improved validation mechanism for QoS rules with port types [RFE] Improved validation mechanism for QoS rules with port types
2016-05-26 22:17:11 Carl Baldwin neutron: importance Undecided Wishlist
2016-05-26 22:33:24 OpenStack Infra neutron: status New In Progress
2016-05-27 06:20:47 Moshe Levi bug added subscriber Moshe Levi
2016-06-02 16:22:36 Armando Migliaccio neutron: status In Progress Confirmed
2016-06-04 06:14:48 OpenStack Infra neutron: status Confirmed In Progress
2016-06-16 15:14:23 Assaf Muller neutron: status In Progress Confirmed
2016-06-21 20:04:44 OpenStack Infra neutron: status Confirmed In Progress
2016-06-22 10:07:27 Slawek Kaplonski description

Old value: identical to the new value recorded in the 14:41:18 entry above.

New value:

Currently the neutron API can return what rules are supported, in the context of ML2, by returning the minimum subset of rules supported by all mechanism drivers. Mechanism drivers in the ML2 plugin can specify a list of QoS rule types which they support. For example, the openvswitch mech_driver will report the "bandwidth_limit_rule" (which in fact applies only to egress traffic) and the "dscp_marking" rule. There are two problems with such a simple check:
- If rules are created or a policy is applied to a port, there is no validation at all. This means that, for example, a policy with a dscp_marking rule can be applied without any errors to a port bound with the linuxbridge or sriov mech_drivers.
- There is no way for a mechanism driver to support some rules only with specific parameters. For example, bandwidth_limit_rule could have a parameter "direction" with values "ingress" and "egress", and some mechanism drivers could support bandwidth limiting only with direction=egress.

We propose changes to how supported rule types are calculated and how validation works. Incompatible configurations or settings should be spotted as soon as possible and stopped at the API level with a clear explanation of what is going on. In corner cases where this cannot be determined until bind time, binding should fail, so that the administrator does not get the wrong impression that all of the policy rules are being applied when that is not the case.

Such validation of rules should be made during events like:
* port_update
* network_update
* policy_update
* rule_create
* rule_update
The method neutron.services.qos.notification_drivers.qos_base:QosServiceNotificationDriverBase.validate_policy_for_{port,network} will be called and will raise an exception if the policy cannot be applied to the port (or to one of its ports). For the ML2 plugin it will raise an exception if the policy contains any rule that is not supported by the mechanism driver used for the port binding, or, if the port is unbound, by the mechanism drivers used for the specific binding:vnic_type.
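As a rough illustration of the validation step described in these entries: the hook name validate_policy_for_port comes from the bug description, but the exception class, argument shapes and lookup logic in this sketch are assumptions, not the actual Neutron implementation.

class QosRuleNotSupported(Exception):
    """Raised when a policy carries a rule the bound driver cannot honour."""

def validate_policy_for_port(policy_rules, port, supported_rules_by_driver):
    """Reject the policy if any of its rules is unsupported for this port.

    policy_rules: list of (rule_type, params) tuples in the policy.
    port: dict carrying the mechanism driver chosen at binding time.
    supported_rules_by_driver: driver name -> {rule_type: {param: values}}.
    """
    driver = port['bound_driver']
    supported = supported_rules_by_driver.get(driver, {})
    for rule_type, params in policy_rules:
        descriptor = supported.get(rule_type)
        if descriptor is None:
            raise QosRuleNotSupported(
                '%s rules are not supported by driver %s' % (rule_type, driver))
        for name, value in params.items():
            allowed = descriptor.get(name)
            if allowed and value not in allowed:
                raise QosRuleNotSupported(
                    '%s=%s is not supported by driver %s for %s rules'
                    % (name, value, driver, rule_type))

# Applying an ingress bandwidth limit to an SR-IOV bound port fails at the
# API instead of being silently ignored:
supported = {'sriovnicswitch': {'bandwidth_limit': {'max_kbps': [],
                                                    'direction': ['egress']}}}
try:
    validate_policy_for_port(
        [('bandwidth_limit', {'max_kbps': 1000, 'direction': 'ingress'})],
        {'bound_driver': 'sriovnicswitch'},
        supported)
except QosRuleNotSupported as exc:
    print(exc)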
2016-06-23 19:16:58 Miguel Angel Ajo neutron: status In Progress Confirmed
2016-07-05 08:09:42 Miguel Angel Ajo neutron: importance Wishlist Medium
2016-07-05 08:09:46 Miguel Angel Ajo neutron: status Confirmed Triaged
2016-07-05 20:18:27 OpenStack Infra neutron: status Triaged In Progress
2016-07-06 12:20:02 Miguel Angel Ajo neutron: status In Progress Triaged
2016-07-07 19:29:50 Armando Migliaccio neutron: importance Medium Wishlist
2016-07-12 19:33:12 OpenStack Infra neutron: status Triaged In Progress
2016-07-13 14:08:14 Nate Johnston neutron: status In Progress Triaged
2016-07-22 02:00:34 Armando Migliaccio neutron: milestone newton-3
2016-07-22 02:10:25 Armando Migliaccio tags qos rfe qos rfe-approved
2016-08-01 21:28:36 OpenStack Infra neutron: status Triaged In Progress
2016-09-01 20:12:37 Armando Migliaccio neutron: milestone newton-3 newton-rc1
2016-09-09 02:29:40 Armando Migliaccio neutron: milestone newton-rc1 ocata-1
2016-11-16 22:43:28 Armando Migliaccio neutron: milestone ocata-1 ocata-2
2016-12-02 01:39:40 Armando Migliaccio neutron: milestone ocata-2
2017-04-16 01:39:55 OpenStack Infra neutron: status In Progress Fix Released
2018-02-28 21:52:53 Akihiro Motoki neutron: milestone pike-2