Activity log for bug #1998608

Date                | Who             | What changed        | Old value   | New value                                 | Message
2022-12-02 14:21:59 | Rodolfo Alonso  | bug                 |             |                                           | added bug
2022-12-02 14:22:05 | Rodolfo Alonso  | neutron: assignee   |             | Rodolfo Alonso (rodolfo-alonso-hernandez) |
2022-12-02 14:22:13 | Rodolfo Alonso  | tags                |             | ovn qos rfe                               |
2022-12-02 14:22:24 | Rodolfo Alonso  | neutron: importance | Undecided   | Wishlist                                  |
2022-12-02 14:42:45 | OpenStack Infra | neutron: status     | New         | In Progress                               |
2022-12-16 14:44:50 | Rodolfo Alonso  | tags                | ovn qos rfe | ovn qos rfe-approved                      |
2023-01-30 09:43:56 | Rodolfo Alonso  | description         | (see below) | (see below)                               |
2023-01-30 09:44:05 | Rodolfo Alonso  | description         | (see below) | (see below)                               |

Description change of 2023-01-30 09:43:56: appended the line "Related bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2165497"; the rest of the text is unchanged.
Description change of 2023-01-30 09:44:05: recapitalized "Related bugzilla" to "Related Bugzilla"; the rest of the text is unchanged.

Description (new value after both changes):

The OVN architecture makes it possible to avoid spawning per-compute agents such as the L3 agent, the DHCP agent or the ML2-specific agent (OVS, SR-IOV, etc.), because the local OVS service is configured by the ovn-controller running on each node. ovn-controller reads from the OVN SB database and executes the needed changes in the local OVS database and OpenFlow tables. However, that removes Neutron's ability to interact directly with each compute node.

For example, in [1][2] I proposed a POC of how to implement the ML2/OVS QoS extension for HW-offloaded ports. With the current driver implementations, HW-offloaded ports do not apply the QoS rules on the VF of the port representors. That means there is a parity gap between virtio ports and HWOL ports. While with ML2/OVS this gap can be closed with [1][2] as a workaround, in ML2/OVN this is not feasible.

This RFE proposes to implement an OVN monitor agent that will run on each compute node, **if needed**, and will implement generic features/tools/operations not yet provided by ovn-controller or the interface drivers. The first feature to be implemented is the QoS extension for the port representor ports; in particular, egress bandwidth limit rules and egress minimum bandwidth rules.

[1] https://review.opendev.org/c/openstack/neutron/+/815037
[2] https://review.opendev.org/c/openstack/neutron/+/816537

Related Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2165497
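
For context on the egress rules named in the description: on a HW-offloaded port the limit has to land on the VF behind the port representor, not on an OVS queue, which is why ovn-controller alone cannot apply it and a per-node agent is proposed. The following is a minimal sketch, not the POC's actual code, assuming the rates are programmed on the VF through its PF with the standard iproute2 interface ("ip link set <pf> vf <n> max_tx_rate/min_tx_rate <Mbps>"); the helper name and PF/VF values are illustrative.

    # Hypothetical sketch of a per-node agent programming VF egress rates.
    import subprocess

    def set_vf_egress_rates(pf: str, vf_index: int,
                            max_kbps: int = 0, min_kbps: int = 0) -> None:
        """Apply an egress bandwidth limit and/or minimum bandwidth to a VF.

        Neutron QoS rules are expressed in kbps; the kernel VF API takes
        whole Mbps. A rate of 0 clears the corresponding limit.
        """
        cmd = ["ip", "link", "set", pf, "vf", str(vf_index)]
        cmd += ["max_tx_rate", str(max_kbps // 1000)]  # egress bandwidth limit
        cmd += ["min_tx_rate", str(min_kbps // 1000)]  # egress minimum bandwidth
        subprocess.run(cmd, check=True)

    # Example: cap VF 3 on PF enp3s0f0 at 10 Mbps and guarantee 1 Mbps.
    # set_vf_egress_rates("enp3s0f0", 3, max_kbps=10000, min_kbps=1000)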