[RFE] Qos policy not supporting sharing bandwidth between several nics of the same vm.

Bug #1858610 reported by Jie Li
Affects: neutron
Status: Won't Fix
Importance: Undecided
Assigned to: Unassigned

Bug Description

Currently, a network interface can be associated with a single QoS policy, and the several NICs of the same VM work with their own associated policies independently. In our production environment, however, considering overall resource usage, we want to limit the total bandwidth of a VM with multiple NICs. In other words, we want the sum of the bandwidth of the VM's NICs to be limited to a specified value. So far, it seems that Neutron does not have this feature. Is there any blueprint working on this feature?

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

Hi,

Currently there is no work or proposal for such a thing. In Neutron You can set a bandwidth limit per port, as the port is the smallest "unit" in Neutron's concepts. There is no concept of a VM in Neutron, so we can't associate a QoS policy with a VM.
You can probably attach different QoS policies to the ports of a single VM and in that way limit the overall bandwidth of the VM to the desired value.

tags: added: rfe
Revision history for this message
Jie Li (neoareslinux) wrote :

Hi Slawek, in our production environment we want to implement this feature. For example:
VM A has two NICs (nicA and nicB). We would like to limit the total bandwidth of nicA and nicB to 100 Mbit/s. If nicA has no network traffic, nicB can use the whole 100 Mbit/s; if nicA and nicB both have network traffic, the sum of the bandwidth of the two NICs will be limited to 100 Mbit/s.

We would like to add a new blueprint to implement this feature:
a shared QoS policy X will be associated with multiple NICs belonging to one VM.

tags: added: qos
Revision history for this message
Slawek Kaplonski (slaweq) wrote :

@Jie Li: A blueprint is not needed for now.
Please describe in more detail how You want to implement this (not too detailed, as it will probably require a spec later as well), for which backend You are planning to implement this feature, and how You are going to implement it in general.

Revision history for this message
Rodolfo Alonso (rodolfo-alonso-hernandez) wrote :

Hi @Jie:

You can also join the Neutron QoS IRC meetings [1].

As Slawek commented, the network backend used is important. But I would also like to see a proposal for the logical implementation:
- How a QoS policy is assigned to several ports so that it is shared.
- How this "link" is going to be represented.
- What the behavior is if this "link" is broken.

Another question: is an interface bond enough for your purposes? If so, the next question is whether we can apply a QoS rule on a bond.

Regards.

[1] http://eavesdrop.openstack.org/#Neutron_QoS_Meeting

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

@Rodolfo,

But where do You want to have "bonds"? In the host, where QoS rules are applied for each port (how?), or in the VM? If it's in the VM, how could it work for Neutron?

When I think about this concept, it seems that it will be very complex and we will probably need a detailed spec proposed even before we discuss it at the drivers meeting. @Jie Li: can You maybe sync with Rodolfo and start such a spec?

Hongbin Lu (hongbin.lu)
summary: - Qos policy not supporting sharing bandwidth between several nics of the
- same vm.
+ [RFE] Qos policy not supporting sharing bandwidth between several nics
+ of the same vm.
Revision history for this message
Jie Li (neoareslinux) wrote :

Hi @Slawek @Rodolfo:

I'd like to briefly describe our thinking here; please let me know if there is something you don't agree with or if you have a better idea. Thanks.

[Backend Driver]
The shared-qos-policy will depend on Linux tc and the IFB interface.

[API and Extensions]
1. Add a new Neutron extension, shared-qos-policy.
2. A shared-qos-policy can be associated with several ports.
3. A port can be associated with a shared-qos-policy or a regular qos policy, but not both at the same time.
   In other words, once a port has been associated with a shared-qos-policy, it can't be associated with a qos policy anymore.
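The mutual-exclusion constraint in point 3 could be enforced with a check like the following. This is a minimal sketch; the function name and the plain-UUID arguments are hypothetical, not part of the Neutron API:

```python
def validate_port_policy_association(qos_policy_id, shared_qos_policy_id):
    """Enforce constraint 3 above: a port may reference a regular qos
    policy or a shared-qos-policy, but never both at the same time.

    Hypothetical validator; both arguments are policy UUID strings
    or None.
    """
    if qos_policy_id is not None and shared_qos_policy_id is not None:
        raise ValueError(
            "a port cannot be associated with both a qos policy and a "
            "shared-qos-policy")
```

A real implementation would run such a check in the API layer, before the association is written to the database.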

[Neutron Server]
If the user uses a nova command to add/delete an interface to/from a VM, neutron-server also needs to catch those events to check whether
the VM has a port associated with a shared-qos-policy; if so, the new port will be bound to (or disassociated from) the shared-qos-policy too.

[DB Model]
Add three new db tables for shared-qos-policy api extension.

qos_shared_policy
+------------------+--------------+------+-----+---------+-------+
| Field            | Type         | Null | Key | Default | Extra |
+------------------+--------------+------+-----+---------+-------+
| id               | varchar(36)  | NO   | PRI | NULL    |       |
| name             | varchar(255) | YES  |     | NULL    |       |
| project_id       | varchar(255) | YES  | MUL | NULL    |       |
| standard_attr_id | bigint(20)   | NO   | UNI | NULL    |       |
+------------------+--------------+------+-----+---------+-------+

qos_shared_policy_rules
+----------------+--------------------------+------+-----+---------+-------+
| Field          | Type                     | Null | Key | Default | Extra |
+----------------+--------------------------+------+-----+---------+-------+
| id             | varchar(36)              | NO   | PRI | NULL    |       |
| qos_policy_id  | varchar(36)              | NO   | MUL | NULL    |       |
| max_kbps       | int(11)                  | YES  |     | NULL    |       |
| max_burst_kbps | int(11)                  | YES  |     | NULL    |       |
| direction      | enum('egress','ingress') | NO   |     | egress  |       |
+----------------+--------------------------+------+-----+---------+-------+

qos_shared_port_policy_binding
+-----------+-------------+------+-----+---------+-------+
| Field     | Type        | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+-------+
| policy_id | varchar(36) | NO   | MUL | NULL    |       |
| port_id   | varchar(36) | NO   | PRI | NULL    |       |
+-----------+-------------+------+-----+---------+-------+
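For illustration, the three tables above could be expressed as DDL roughly like this. This is a sketch only, using SQLite syntax with simplified types; a real Neutron migration would use alembic and MySQL column types:

```python
import sqlite3

# DDL sketch mirroring the three proposed tables (types simplified).
DDL = """
CREATE TABLE qos_shared_policy (
    id               VARCHAR(36) PRIMARY KEY,
    name             VARCHAR(255),
    project_id       VARCHAR(255),
    standard_attr_id BIGINT NOT NULL UNIQUE
);
CREATE TABLE qos_shared_policy_rules (
    id             VARCHAR(36) PRIMARY KEY,
    qos_policy_id  VARCHAR(36) NOT NULL REFERENCES qos_shared_policy(id),
    max_kbps       INTEGER,
    max_burst_kbps INTEGER,
    direction      VARCHAR(7) NOT NULL DEFAULT 'egress'
                   CHECK (direction IN ('egress', 'ingress'))
);
CREATE TABLE qos_shared_port_policy_binding (
    policy_id VARCHAR(36) NOT NULL REFERENCES qos_shared_policy(id),
    port_id   VARCHAR(36) PRIMARY KEY
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")}
```

The one-row-per-port binding table with `port_id` as primary key is what enforces that a port joins at most one shared policy.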

[Agent (ovs-agent, linux-bridge-agent)]
Backend drivers: Linux tc, IFB (Intermediate Functional Block).

A shared-qos-policy will be associated with multiple ports. If the ports belong to one VM, the ovs-agent will redirect the traffic of those NICs to an ifb-X interface, and the
shared-qos-policy bandwidth rule will be configured on the ifb-X interface. After QoS shaping, the traffic will be sent back to the qvoXXX interfaces.
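The redirection described above could be sketched as follows. `build_shared_limit_commands` is a hypothetical helper that only builds the iproute2/tc command lines an agent might execute; the TBF parameters and the `matchall`/`mirred` filter form are assumptions, not the actual agent code:

```python
def build_shared_limit_commands(ifb_dev, tap_devs, max_kbps, burst_kb):
    """Build command lines for shaping several taps through one IFB.

    Traffic sent by the VM arrives as *ingress* on the host-side tap,
    so each tap's ingress is mirrored into the IFB device, where a
    single TBF qdisc enforces the aggregate limit. Hypothetical
    helper: it only returns the commands, it does not run them.
    """
    cmds = [
        ["ip", "link", "add", ifb_dev, "type", "ifb"],
        ["ip", "link", "set", ifb_dev, "up"],
        # One TBF qdisc on the IFB device limits the aggregate rate.
        ["tc", "qdisc", "add", "dev", ifb_dev, "root", "tbf",
         "rate", "%skbit" % max_kbps, "burst", "%skb" % burst_kb,
         "latency", "50ms"],
    ]
    for tap in tap_devs:
        cmds.append(["tc", "qdisc", "add", "dev", tap, "ingress"])
        # Redirect everything arriving on the tap into the IFB device.
        cmds.append(["tc", "filter", "add", "dev", tap, "parent",
                     "ffff:", "matchall", "action", "mirred", "egress",
                     "redirect", "dev", ifb_dev])
    return cmds
```

For the 100 Mbit/s example earlier in the thread, `build_shared_limit_commands("ifb0", ["tapA", "tapB"], 100000, 100)` yields seven commands: two to create the IFB device, one TBF qdisc, and an ingress qdisc plus redirect filter per tap.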

Revision history for this message
Rodolfo Alonso (rodolfo-alonso-hernandez) wrote :

Hi @Jie:

About your last comment.

[DB model]
I don't see the need for this new QoS policy. Instead, what you should create is a shared QoS maxBW rule. The ports having this shared maxBW rule will share the network BW.
A QoS policy can't have both a maxBW rule and a shared maxBW rule; those rules are incompatible. This should be checked first.

[Neutron server]
What you need to capture are the port updates/creations. When a port receives a QoS policy with a shared maxBW rule, you need to check whether the port is bound. If so, you need to send an update to the driver to handle this.
Another problem could be what to do with ports having the same shared maxBW rule but attached to different VMs or hosts. In this case, how are you going to apply the shared maxBW rule? Are you going to block the port binding?

[Linux Bridge]
What you propose is to join and redirect all tap devices through an IFB block, to control the BW of all ports in a single place. This has a disadvantage: all the traffic associated with those VM ports will be funnelled through this IFB block. This can reduce the max I/O throughput, but that could be acceptable for this feature.

[OVS]
I don't agree with implementing the same solution as in LB. That architecture [1] was abandoned in favor of connecting the TAP device directly to the integration bridge, which is compatible with OVS-DPDK and HW offload features. You need to make a good architecture proposal here to keep this change compatible with current OVS features and VNIC types, and to have it accepted by the Neutron community.

Regards.

[1] https://docs.openstack.org/neutron/pike/contributor/internals/openvswitch_agent.html

Revision history for this message
Jie Li (neoareslinux) wrote :

Hi Rodolfo:

[DB model]
We will add a new rule (a shared maxBW rule) instead of adding a new qos policy.

[Neutron server]
Currently, we assume that if a qos policy includes a shared maxBW rule, then when associating that policy with a port we should check the device_id of the port to make sure the policy is associated only with ports of the same VM.
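That device_id check could look roughly like this. It is a sketch only: the helper name and the dict-based port representation are hypothetical, and it deliberately leaves open the unbound-port case:

```python
def validate_ports_same_vm(ports):
    """Check that all ports targeted by a shared maxBW rule carry the
    same device_id, i.e. belong to one VM.

    Hypothetical helper; ports are plain dicts here. Returns the
    common device_id (which may be None for unbound ports, a case
    that still needs a decision in the spec).
    """
    device_ids = {port.get("device_id") for port in ports}
    if len(device_ids) != 1:
        raise ValueError(
            "shared maxBW rule would span several VMs: %s"
            % sorted(d or "<unbound>" for d in device_ids))
    return device_ids.pop()
```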

[OVS]
I need some more time to design an architecture compatible with current OVS features and VNIC types. It will be posted later.

Revision history for this message
Rodolfo Alonso (rodolfo-alonso-hernandez) wrote :

Hi Jie:

[Neutron server]
A port with a QoS policy may *not* be bound to a VM; it can be just created in the Neutron DB. This check is not valid. In the spec, if you want to push this RFE forward, you need to handle those situations:
- Ports with a shared maxBW rule, with or without a binding.
- Ports with a shared maxBW rule but different device_ids.

Regards.

Revision history for this message
Jie Li (neoareslinux) wrote :

Hi Rodolfo:
[ovs-dpdk]
Network packets will not go through the hypervisor host's kernel, so Linux tc will not work under this
architecture. OVS-DPDK has a QoS function, but it depends on the DPDK library, so implementing shared-qos-policy on top of the DPDK lib would be complicated.

[sriov]
Users who use SR-IOV with OpenStack want to speed up network performance, so I think it may be inappropriate to redirect their packets to an IFB interface. As you mentioned above, it will reduce I/O throughput.

We will probably implement the first version without considering OVS-DPDK and SR-IOV.

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

OK, let's discuss this at one of the next drivers meetings: http://eavesdrop.openstack.org/#Neutron_drivers_Meeting, but we will probably need a spec with more details.

tags: added: rfe-triaged
removed: rfe
Revision history for this message
Slawek Kaplonski (slaweq) wrote :

We discussed this RFE at the last drivers meeting, on 28.02.2020, and decided to approve the RFE as an idea.
But as the implementation is going to be complex and different backends need different ways to support it, we will need a detailed spec and a PoC first. So please work on the spec and PoC first.

tags: added: rfe-approved
removed: rfe-triaged
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron-specs (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/723384

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on neutron-specs (master)

Change abandoned by Jie Li (<email address hidden>) on branch: master
Review: https://review.opendev.org/723384

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by "Rodolfo Alonso <email address hidden>" on branch: master
Review: https://review.opendev.org/c/openstack/neutron-specs/+/723384
Reason: Patch abandoned due to the lack of activity. Please feel free to restore it if needed.

Revision history for this message
Rodolfo Alonso (rodolfo-alonso-hernandez) wrote :

Bug closed due to the lack of activity. Please feel free to reopen if needed.

Changed in neutron:
status: New → Won't Fix