[QoS] QoS policy doesn't restrict UDP traffic

Bug #1561872 reported by Kristina Berezovskaia
Affects: Mirantis OpenStack
Status: Invalid
Importance: High
Assigned to: MOS Neutron
Milestone: 9.0

Bug Description

Detailed bug description:

After adding a QoS policy to the VM's port, UDP traffic is not restricted.

Steps to reproduce:

1) Launch Ubuntu vm1 in admin_internal_net
2) Launch Ubuntu vm2 in admin_internal_net
3) Create a QoS policy:
 neutron qos-policy-create p_for_vm1
4) Create a QoS rule:
neutron qos-bandwidth-limit-rule-create p_for_vm1 --max-kbps 2500 --max-burst-kbps 300
5) Update the port with the new QoS policy for vm1:
neutron port-list | grep 192.168.0.9
| 96243790-ca73-4748-aa6a-51141d77f949 | | fa:16:3e:55:b3:44 | {"subnet_id": "39e02322-9972-4abd-b491-8ee682c4e351", "ip_address": "192.168.0.9"} |
root@node-6:~# neutron port-update --qos-policy p_for_vm1 96243790-ca73-4748-aa6a-51141d77f949
6) Start TCP traffic with iperf3 from vm1 to vm2:
Expected and actual results: traffic is restricted.
On vm1:
ubuntu@vm1:~$ iperf3 --client 192.168.0.10 --time 60 -i 1
Connecting to host 192.168.0.10, port 5201
[ 4] local 192.168.0.9 port 54536 connected to 192.168.0.10 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.26 MBytes 10.6 Mbits/sec 216 32.5 KBytes
[ 4] 1.00-2.00 sec 385 KBytes 3.15 Mbits/sec 74 7.07 KBytes
[ 4] 2.00-3.00 sec 348 KBytes 2.85 Mbits/sec 41 5.66 KBytes
[ 4] 3.00-4.00 sec 354 KBytes 2.90 Mbits/sec 32 5.66 KBytes
[ 4] 4.00-5.00 sec 342 KBytes 2.80 Mbits/sec 38 5.66 KBytes
[ 4] 5.00-6.00 sec 0.00 Bytes 0.00 bits/sec 40 5.66 KBytes
[ 4] 6.00-7.00 sec 277 KBytes 2.27 Mbits/sec 31 5.66 KBytes
[ 4] 7.00-8.00 sec 280 KBytes 2.29 Mbits/sec 39 5.66 KBytes
[ 4] 8.00-9.00 sec 280 KBytes 2.29 Mbits/sec 40 5.66 KBytes

On vm2:
Accepted connection from 192.168.0.9, port 54535
[ 5] local 192.168.0.10 port 5201 connected to 192.168.0.9 port 54536
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 539 KBytes 4.41 Mbits/sec
[ 5] 1.00-2.00 sec 314 KBytes 2.57 Mbits/sec
[ 5] 2.00-3.00 sec 324 KBytes 2.65 Mbits/sec
[ 5] 3.00-4.00 sec 255 KBytes 2.08 Mbits/sec
[ 5] 4.00-5.00 sec 321 KBytes 2.63 Mbits/sec
[ 5] 5.00-6.00 sec 322 KBytes 2.64 Mbits/sec
[ 5] 6.00-7.00 sec 256 KBytes 2.10 Mbits/sec
[ 5] 7.00-8.00 sec 320 KBytes 2.62 Mbits/sec

7) Update the rule to 400 kbit/s (iperf UDP traffic is 1 Mbit/s by default); the exact commands are sketched below, after the expected/actual results.
8) Start UDP traffic with iperf3 from vm1 to vm2
Expected results: traffic is restricted
Actual result: traffic isn't restricted
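
For step 7, a minimal sketch of updating the rule and checking what was actually applied (the rule ID is a placeholder; the qos_policy_id field, the tap device name derived from the port ID, and the assumption that the OVS agent programs the limit as interface ingress policing are illustrative, not taken from this report):

# Sketch only: <rule-id> comes from qos-bandwidth-limit-rule-list; the tap name is illustrative.
neutron qos-bandwidth-limit-rule-list p_for_vm1
neutron qos-bandwidth-limit-rule-update <rule-id> p_for_vm1 --max-kbps 400 --max-burst-kbps 300
# Confirm the policy is attached to vm1's port:
neutron port-show 96243790-ca73-4748-aa6a-51141d77f949 | grep qos_policy_id
# On the compute node hosting vm1, inspect what was programmed for the port
# (assumption: the OVS QoS driver applies the limit as ingress policing on the tap interface):
ovs-vsctl list interface tap96243790-ca | grep ingress_policing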

Iperf UDP (the QoS restriction is 400 + 300 kbit/s, but we see that traffic is about 1 Mbit/s):
On vm1:
ubuntu@vm1:~$ iperf3 --client 192.168.0.10 --time 60 -i 1 -u
Connecting to host 192.168.0.10, port 5201
[ 4] local 192.168.0.9 port 54647 connected to 192.168.0.10 port 5201
[ ID] Interval Transfer Bandwidth Total Datagrams
[ 4] 0.00-1.00 sec 120 KBytes 983 Kbits/sec 15
[ 4] 1.00-2.00 sec 128 KBytes 1.05 Mbits/sec 16
[ 4] 2.00-3.00 sec 128 KBytes 1.05 Mbits/sec 16
[ 4] 3.00-4.00 sec 128 KBytes 1.05 Mbits/sec 16
[ 4] 4.00-5.00 sec 128 KBytes 1.05 Mbits/sec 16
[ 4] 5.00-6.00 sec 128 KBytes 1.05 Mbits/sec 16
[ 4] 6.00-7.00 sec 128 KBytes 1.05 Mbits/sec 16

On vm2:
Accepted connection from 192.168.0.9, port 54542
[ 5] local 192.168.0.10 port 5201 connected to 192.168.0.9 port 54647
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 5] 0.00-1.00 sec 120 KBytes 983 Kbits/sec 1.138 ms 0/15 (0%)
[ 5] 1.00-2.00 sec 128 KBytes 1.05 Mbits/sec 0.431 ms 0/16 (0%)
[ 5] 2.00-3.00 sec 64.0 KBytes 524 Kbits/sec 0.263 ms 7/15 (47%)
[ 5] 3.00-4.00 sec 48.0 KBytes 393 Kbits/sec 0.196 ms 10/16 (62%)
[ 5] 4.00-5.00 sec 48.0 KBytes 393 Kbits/sec 0.150 ms 10/16 (62%)
[ 5] 5.00-6.00 sec 48.0 KBytes 393 Kbits/sec 0.115 ms 10/16 (62%)
[ 5] 6.00-7.00 sec 48.0 KBytes 393 Kbits/sec 0.094 ms 10/16 (62%)

Iperf UDP (we see that traffic is about 1 Mbit/s):
On vm1:
ubuntu@vm1:~$ iperf3 --client 192.168.0.10 --time 60 -i 1 -u
Connecting to host 192.168.0.10, port 5201
[ 4] local 192.168.0.9 port 46589 connected to 192.168.0.10 port 5201
[ ID] Interval Transfer Bandwidth Total Datagrams
[ 4] 0.00-1.00 sec 120 KBytes 983 Kbits/sec 15
[ 4] 1.00-2.00 sec 128 KBytes 1.05 Mbits/sec 16
[ 4] 2.00-3.00 sec 128 KBytes 1.05 Mbits/sec 16
[ 4] 3.00-4.00 sec 128 KBytes 1.05 Mbits/sec 16
[ 4] 4.00-5.00 sec 128 KBytes 1.05 Mbits/sec 16
[ 4] 5.00-6.00 sec 128 KBytes 1.05 Mbits/sec 16

On vm2:
[ 5] local 192.168.0.10 port 5201 connected to 192.168.0.9 port 46589
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 5] 0.00-1.00 sec 120 KBytes 983 Kbits/sec 1.136 ms 0/15 (0%)
[ 5] 1.00-2.00 sec 128 KBytes 1.05 Mbits/sec 0.427 ms 0/16 (0%)
[ 5] 2.00-3.00 sec 128 KBytes 1.05 Mbits/sec 0.172 ms 0/16 (0%)
[ 5] 3.00-4.00 sec 128 KBytes 1.05 Mbits/sec 0.084 ms 0/16 (0%)
[ 5] 4.00-5.00 sec 128 KBytes 1.05 Mbits/sec 0.050 ms 0/16 (0%)
[ 5] 5.00-6.00 sec 128 KBytes 1.05 Mbits/sec 0.042 ms 0/16 (0%)

Iperf with --bandwidth 5M (the QoS restriction observed is about 2.5-2.8 Mbit/s; the restriction takes effect as seen on vm2, while the QoS policy is still applied to vm1):
On vm1:
ubuntu@vm1:~$ iperf3 --client 192.168.0.10 --time 60 -i 1 -u --bandwidth 5M
Connecting to host 192.168.0.10, port 5201
[ 4] local 192.168.0.9 port 58795 connected to 192.168.0.10 port 5201
[ ID] Interval Transfer Bandwidth Total Datagrams
[ 4] 0.00-1.00 sec 552 KBytes 4.52 Mbits/sec 69
[ 4] 1.00-2.00 sec 608 KBytes 4.98 Mbits/sec 76
[ 4] 2.00-3.00 sec 616 KBytes 5.05 Mbits/sec 77
[ 4] 3.00-4.00 sec 608 KBytes 4.98 Mbits/sec 76
[ 4] 4.00-5.00 sec 608 KBytes 4.98 Mbits/sec 76
[ 4] 5.00-6.00 sec 616 KBytes 5.05 Mbits/sec 77
[ 4] 6.00-7.00 sec 608 KBytes 4.98 Mbits/sec 76
[ 4] 7.00-8.00 sec 608 KBytes 4.98 Mbits/sec 76
[ 4] 8.00-9.00 sec 616 KBytes 5.05 Mbits/sec 77
[ 4] 9.00-10.00 sec 608 KBytes 4.98 Mbits/sec 76

On vm2:
Accepted connection from 192.168.0.9, port 54539
[ 5] local 192.168.0.10 port 5201 connected to 192.168.0.9 port 58795
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 5] 0.00-1.00 sec 544 KBytes 4.46 Mbits/sec 0.052 ms 0/68 (0%)
[ 5] 1.00-2.00 sec 304 KBytes 2.49 Mbits/sec 0.032 ms 36/74 (49%)
[ 5] 2.00-3.00 sec 304 KBytes 2.49 Mbits/sec 0.032 ms 38/76 (50%)
[ 5] 3.00-4.00 sec 304 KBytes 2.49 Mbits/sec 0.034 ms 38/76 (50%)
[ 5] 4.00-5.00 sec 304 KBytes 2.49 Mbits/sec 0.015 ms 39/77 (51%)
[ 5] 5.00-6.00 sec 304 KBytes 2.49 Mbits/sec 0.018 ms 38/76 (50%)
[ 5] 6.00-7.00 sec 304 KBytes 2.49 Mbits/sec 0.012 ms 38/76 (50%)
[ 5] 7.00-8.00 sec 304 KBytes 2.49 Mbits/sec 0.016 ms 39/77 (51%)
[ 5] 8.00-9.00 sec 304 KBytes 2.49 Mbits/sec 0.015 ms 38/76 (50%)
[ 5] 9.00-10.00 sec 304 KBytes 2.49 Mbits/sec 0.020 ms 38/76 (50%)

Reproducibility:
Reproduced consistently on ISO 45, 96, and 101.

Description of the environment:
+---------------------------+---------------------------------+-------------+
| Host                      | Command                         | Output      |
+---------------------------+---------------------------------+-------------+
| nailgun.test.domain.local | cat /etc/fuel_build_id          | 101         |
| nailgun.test.domain.local | cat /etc/fuel_build_number      | 101         |
| nailgun.test.domain.local | cat /etc/fuel_release           | 9.0         |
| nailgun.test.domain.local | cat /etc/fuel_openstack_version | liberty-9.0 |
+---------------------------+---------------------------------+-------------+

affects: fuel → mos
Changed in mos:
milestone: 9.0 → none
milestone: none → 9.0
Changed in mos:
status: New → Confirmed
tags: added: area-build
tags: added: area-neutron
removed: area-build
Revision history for this message
Sergii (sgudz) wrote :

Works as expected; due to the way UDP works, throughput should be checked on the receiving host.

Changed in mos:
status: Confirmed → Invalid
Revision history for this message
Alexander Ignatov (aignatov) wrote :

Kristina, this behaviour is expected for UDP traffic. The measurements on the iperf client side show the client's own sending rate before the traffic goes through vm1's port in your case. The actual result is just what the iperf server stats show. In other words, in your case limiting UDP traffic down to 400+300 kbps works as designed: the client should show 1 Mbit/s, since that is the iperf default, and the iperf server should show 400+(~300) kbps.

I've checked it against the latest code:

$ neutron qos-bandwidth-limit-rule-update c2a4f504-b9cb-43e8-ad38-056a99bf6fa7 p_for_vm1 --max-kbps 400 --max-burst-kbps 300
Updated bandwidth_limit_rule: c2a4f504-b9cb-43e8-ad38-056a99bf6fa7
$ neutron qos-bandwidth-limit-rule-list p_for_vm1
+--------------------------------------+----------------+----------+
| id | max_burst_kbps | max_kbps |
+--------------------------------------+----------------+----------+
| c2a4f504-b9cb-43e8-ad38-056a99bf6fa7 | 300 | 400 |
+--------------------------------------+----------------+----------+

root@vm1:/home/ubuntu# iperf --client 10.0.0.8 --time 20 -i 1 -u
------------------------------------------------------------
Client connecting to 10.0.0.8, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.6 port 49436 connected with 10.0.0.8 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 1.0- 2.0 sec 129 KBytes 1.06 Mbits/sec
[ 3] 2.0- 3.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 3.0- 4.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 4.0- 5.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 5.0- 6.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 6.0- 7.0 sec 129 KBytes 1.06 Mbits/sec
[ 3] 7.0- 8.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 8.0- 9.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 9.0-10.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 10.0-11.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 11.0-12.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 12.0-13.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 13.0-14.0 sec 129 KBytes 1.06 Mbits/sec
[ 3] 14.0-15.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 15.0-16.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 16.0-17.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 17.0-18.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 18.0-19.0 sec 129 KBytes 1.06 Mbits/sec
[ 3] 19.0-20.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 0.0-20.0 sec 2.50 MBytes 1.05 Mbits/sec
[ 3] Sent 1785 datagrams
[ 3] Server Report:
[ 3] 0.0-20.3 sec 1.22 MBytes 504 Kbits/sec 16.402 ms 916/ 1785 (51%)

504 Kbits/sec is ok in this case ^^
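
Note: the original report used iperf3, which by default does not print a UDP server report on the client side. A minimal sketch of reading the receiving-side (policed) rate without logging in to vm2, assuming the installed iperf3 is new enough to support the --get-server-output option:

# Append the server's statistics to the client output after the test;
# the per-interval client lines will still show ~1 Mbit/s because a UDP sender does not back off.
iperf3 --client 192.168.0.10 --time 60 -i 1 -u --get-server-output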
