[RFE] OpenVSwitch HTB QoS not efficient for a lot of interfaces

Bug #2052906 reported by Ilia Baikov
Affects: neutron
Status: New
Importance: Wishlist
Assigned to: Unassigned

Bug Description

Hello,
Recently I found out that linux-htb is hardcoded in the QoS section of the neutron agent's code. For small workloads that is enough, but with 200+ interfaces overall performance suffers due to high IRQ load. I guess we could try to use mqprio to balance tc flows across multiple queues, but is that a supported scenario for the neutron agent? I'm really not sure DPDK is able to help in this case.

Conditions:
OpenStack Zed
Open vSwitch (vanilla, not DPDK)
400 interfaces with QoS on each
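For reference, a minimal sketch of the kind of per-port linux-htb QoS the agent ends up configuring in OVS; the port name and rates here are made-up examples for illustration, not the agent's actual values:

```shell
# Illustrative only: a per-port linux-htb QoS record in OVS,
# roughly what is created for each of the 400 interfaces.
# "tap-example" and the rates are placeholder values.
ovs-vsctl set port tap-example qos=@newqos -- \
  --id=@newqos create qos type=linux-htb \
    other-config:max-rate=10000000 \
    queues:0=@q0 -- \
  --id=@q0 create queue other-config:max-rate=10000000
```

Each such record results in an HTB qdisc on the underlying device, which is where the per-interface tc overhead comes from.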

Tags: qos rfe
Revision history for this message
Bence Romsics (bence-romsics) wrote :

Hi,

Thanks for your report.

I tagged this report as a Request For Enhancement (RFE). The usual process for an RFE is to discuss its validity, scope and high level plans on the Neutron Drivers meeting, which is held on Fridays over IRC:

https://meetings.opendev.org/#Neutron_drivers_Meeting
https://wiki.openstack.org/wiki/Meetings/NeutronDrivers

We can likely schedule this topic for the next meeting; however, please find the exact meeting agenda and schedule announced on the openstack-discuss mailing list, usually the day before:

https://lists.openstack.org/mailman3/lists/openstack-discuss.lists.openstack.org/

Please join if you can. You're welcome to present your ideas and plans. Please also consider if you can contribute to this enhancement.

Of course feel free to add further details or questions here too.

Best regards,
Bence

Changed in neutron:
importance: Undecided → Wishlist
Revision history for this message
Brian Haley (brian-haley) wrote :

Ilia - would you be able to attend the Drivers meeting tomorrow, Friday February 16th to discuss this? Please see the link Bence provided for info.

Revision history for this message
LIU Yulong (dragon889) wrote :

Alternatively, QoS with OVS meters can support bandwidth and packet rate limits:
https://review.opendev.org/q/topic:%22bug/1964342%22
https://review.opendev.org/q/topic:%22packet_rate_limit%22
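To make the alternative concrete, here is a hedged sketch of a meter-based limit applied with ovs-ofctl; the bridge name, meter id, port, and rate are example values, not what the linked patches actually install:

```shell
# Illustrative only: an OpenFlow meter as an alternative to linux-htb.
# Meters are enforced in the datapath, with no per-port tc qdisc.
ovs-ofctl -O OpenFlow13 add-meter br-int \
  'meter=1,kbps,band=type=drop,rate=10000'

# Send matching traffic through the meter (example match):
ovs-ofctl -O OpenFlow13 add-flow br-int \
  'in_port=1,actions=meter:1,normal'
```

A pktps band type can be used instead of kbps for packet rate limiting, which is what the packet_rate_limit work above builds on.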

Revision history for this message
Ilia Baikov (iliabaikov) wrote :

Hello, LIU
If OVS with NIC offload lets us drop TC usage, I think this is the solution. I have active setups with a lot of interfaces (about 300-400 interfaces with QoS configured), so I could test it and give feedback. Let me know if you need any help.

Revision history for this message
LIU Yulong (dragon889) wrote :

Regarding offloading, some vendors' NICs may not support a flow pipeline with two meter actions. In other words, you should not enable meters for bandwidth and packet rate limiting at the same time.

Revision history for this message
Ilia Baikov (iliabaikov) wrote :

As far as I can see, OVS is able to do QoS with DPDK, so this could be offloaded to DPDK. I'm not sure about pps, though; I can't guess what a pps limit would be used for there.

"It is possible to apply both ingress and egress limiting when using the DPDK datapath. These are referred to as QoS and Rate Limiting, respectively."
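For illustration, a sketch of the two DPDK-datapath mechanisms the quoted documentation refers to; the port name and rate values are example placeholders:

```shell
# Illustrative only: ingress policing on a DPDK (vhost-user) port,
# rate in kbps, burst in kb. "vhost-user0" is a placeholder name.
ovs-vsctl set interface vhost-user0 \
  ingress_policing_rate=10000 \
  ingress_policing_burst=1000

# Illustrative only: egress limiting via the egress-policer QoS type,
# cir (committed information rate) in bytes/sec, cbs (burst) in bytes.
ovs-vsctl set port vhost-user0 qos=@qos1 -- \
  --id=@qos1 create qos type=egress-policer \
    other-config:cir=46000000 other-config:cbs=2048
```

Both are enforced inside the DPDK datapath, so no kernel tc qdiscs (and none of the associated IRQ load) are involved.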
