MOS 9.2 ovs-dpdk performance test
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Fuel for OpenStack | Invalid | High | Xiwen Deng |
Bug Description
In MOS 9.2, when running the RFC2544 zero-frame-loss test, the DPDK performance result is low.
The environment contains two DPDK interfaces and one VM. The VM has two NICs, each NIC has two queues, and each DPDK interface is configured with two queues as well.
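For context, a two-queue DPDK port setup like this is usually requested through OVSDB (a hedged sketch; dpdk0 and dpdk1 are the port names from this report, and the exact option name depends on the OVS release — OVS 2.6+ uses options:n_rxq, older releases used other_config:n_rxq):

# Request two receive queues on each physical DPDK port (OVS 2.6+ syntax)
ovs-vsctl set Interface dpdk0 options:n_rxq=2
ovs-vsctl set Interface dpdk1 options:n_rxq=2

On the guest side, a matching <driver queues='2'/> element on each libvirt vhost-user interface gives every VM NIC two queue pairs.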
The environment configuration is shown below:
top -p `pidof ovs-vswitchd` -H -d1
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
15924 root 10 -10 27.159g 296408 9956 R 99.9 0.1 34:34.80 pmd222
15925 root 10 -10 27.159g 296408 9956 R 99.9 0.1 34:34.80 pmd219
15922 root 10 -10 27.159g 296408 9956 R 99.8 0.1 34:34.79 pmd223
15923 root 10 -10 27.159g 296408 9956 R 99.8 0.1 34:34.79 pmd224
15928 root 10 -10 27.159g 296408 9956 R 99.8 0.1 34:34.80 pmd218
15929 root 10 -10 27.159g 296408 9956 R 99.8 0.1 34:34.79 pmd221
15930 root 10 -10 27.159g 296408 9956 R 99.8 0.1 34:34.79 pmd220
15931 root 10 -10 27.159g 296408 9956 R 99.8 0.1 34:34.80 pmd217
root@compute-3:~# ovs-vsctl get open_vswitch . other_config
{dpdk-extra="-n 2 --vhost-owner libvirt-qemu:kvm --vhost-perm 0664", dpdk-init="true", dpdk-lcore-
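The other_config output above is cut off, but the eight pinned PMD threads seen in top normally come from a pmd-cpu-mask entry in the same table. A hedged reconstruction for the cores used here (13-16 and 33-36; the actual mask in this deployment is not visible in the truncated output):

# Pin PMD threads to cores 13-16 and 33-36 (bits 13..16 = 0x1E000, bits 33..36 = 0x1E00000000)
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x1E0001E000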
root@compute-3:~# ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 1 core_id 35:
isolated : true
port: vhu82a81743-88 queue-id: 1
pmd thread numa_id 1 core_id 33:
isolated : true
port: dpdk0 queue-id: 1
pmd thread numa_id 1 core_id 14:
isolated : true
port: dpdk1 queue-id: 0
pmd thread numa_id 1 core_id 15:
isolated : true
port: vhu82a81743-88 queue-id: 0
pmd thread numa_id 1 core_id 16:
isolated : true
port: vhueee4b3fb-32 queue-id: 0
pmd thread numa_id 1 core_id 34:
isolated : true
port: dpdk1 queue-id: 1
pmd thread numa_id 1 core_id 13:
isolated : true
port: dpdk0 queue-id: 0
pmd thread numa_id 1 core_id 36:
isolated : true
port: vhueee4b3fb-32 queue-id: 1
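The isolated : true flag on every PMD above means each rx queue was explicitly pinned, which on OVS 2.6+ is done per interface with pmd-rxq-affinity. A hedged sketch that would reproduce the mapping shown above (the queue:core pairs are read directly from the pmd-rxq-show output):

# Pin each rx queue to the core it appears on in pmd-rxq-show
ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:13,1:33"
ovs-vsctl set Interface dpdk1 other_config:pmd-rxq-affinity="0:14,1:34"
ovs-vsctl set Interface vhu82a81743-88 other_config:pmd-rxq-affinity="0:15,1:35"
ovs-vsctl set Interface vhueee4b3fb-32 other_config:pmd-rxq-affinity="0:16,1:36"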
root@compute-3:~# ovs-appctl dpif-netdev/pmd-stats-show
pmd thread numa_id 1 core_id 35:
emc hits:0
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:0
lost:0
polling cycles:183391851052 (100.00%)
processing cycles:0 (0.00%)
pmd thread numa_id 1 core_id 33:
emc hits:5169955
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:500
lost:0
polling cycles:123787408521 (80.77%)
processing cycles:29463516747 (19.23%)
avg cycles per packet: 29639.74 (153250925268/
avg processing cycles per packet: 5698.44 (29463516747/
main thread:
emc hits:3
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:0
lost:0
polling cycles:21534522 (99.88%)
processing cycles:25472 (0.12%)
avg cycles per packet: 7186664.67 (21559994/3)
avg processing cycles per packet: 8490.67 (25472/3)
pmd thread numa_id 1 core_id 14:
emc hits:5160183
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:502
lost:2
polling cycles:125545461034 (80.41%)
processing cycles:30583715341 (19.59%)
avg cycles per packet: 30253.58 (156129176375/
avg processing cycles per packet: 5926.29 (30583715341/
pmd thread numa_id 1 core_id 15:
emc hits:0
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:0
lost:0
polling cycles:182558896290 (100.00%)
processing cycles:0 (0.00%)
pmd thread numa_id 1 core_id 16:
emc hits:0
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:0
lost:0
polling cycles:182211680516 (100.00%)
processing cycles:0 (0.00%)
pmd thread numa_id 1 core_id 34:
emc hits:5162238
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:501
lost:1
polling cycles:123743090602 (80.82%)
processing cycles:29366438623 (19.18%)
avg cycles per packet: 29656.65 (153109529225/
avg processing cycles per packet: 5688.15 (29366438623/
pmd thread numa_id 1 core_id 13:
emc hits:5167918
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:500
lost:0
polling cycles:125398694242 (80.39%)
processing cycles:30587837876 (19.61%)
avg cycles per packet: 30180.71 (155986532118/
avg processing cycles per packet: 5918.22 (30587837876/
pmd thread numa_id 1 core_id 36:
emc hits:0
megaflow hits:0
avg. subtable lookups per hit:0.00
miss:0
lost:0
polling cycles:182655421216 (100.00%)
processing cycles:0 (0.00%)
From the pmd-stats-show output we can see that four of the PMD-pinned cores (15, 16, 35, 36) spend 100% of their cycles polling and never process a packet; only the other four PMD cores actually process traffic.
Why do only four of the eight PMD cores process packets?
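The four idle cores (15, 16, 35, 36) are exactly the ones polling the vhost-user rx queues, i.e. the VM-to-OVS direction, so zero processing there means no packets were seen coming out of the guest on any queue during this snapshot. As a hedged first check, assuming a Linux guest (the interface name eth0 inside the VM is an assumption), verify that the guest actually forwards traffic and that virtio multiqueue is enabled in it, since the host-side queues='2' setting alone does not activate the extra queue pairs:

# Inside the VM: enable and verify two combined queue pairs per NIC
ethtool -L eth0 combined 2
ethtool -l eth0
# And confirm the guest is set up to forward between its NICs
sysctl net.ipv4.ip_forward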
description: updated
Changed in fuel:
status: Incomplete → Invalid
Xiwen, could you please provide an example of the expected behavior? How low is the DPDK performance? Do you experience packet loss, heavy jitter, or increased latency? What are the throughput numbers? Are you measuring forwarding inside a VM, or is it bridging? Are there any iptables/ebtables rules inside this VM? It would also be great to show some kind of deployment scheme. Please answer all the questions above.