ovs plugin performance issue
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Nova | Confirmed | Undecided | yong sheng gong |
Bug Description
To enable security groups, which are based on iptables, we configured the hybrid vif-driver for the ovs plugin in our DC.
During network performance testing there is a distinct bandwidth loss.
Test scenario: two VMs reside on the same compute node and belong to the same L2 network. When using the hybrid vif-driver (LibvirtHybridO...), bandwidth is markedly lower than with the plain OVS vif-driver (see the test results below).
We should probably look into this bandwidth loss and figure out a solution. Thanks.
Test result:
*******
root@vm-A-1:~# iperf -s
-------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
-------
[ 4] local 128.100.200.66 port 5001 connected with 128.100.200.44 port 35483
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-180.0 sec 49.0 GBytes 2.34 Gbits/sec
root@vm-A-1:~# iperf -s
-------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
-------
[ 4] local 128.100.200.33 port 5001 connected with 128.100.200.22 port 41452
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-180.0 sec 267 GBytes 12.7 Gbits/sec
*******
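The gap can be quantified directly from the two figures above (a quick sanity check, assuming the first run is the hybrid driver and the second the plain OVS driver, as the description suggests):

```python
# Bandwidth figures from the two iperf runs above (Gbit/s).
hybrid = 2.34  # run with the hybrid vif-driver (assumed)
plain = 12.7   # run with the plain OVS vif-driver (assumed)

loss = (plain - hybrid) / plain
print(f"hybrid is {plain / hybrid:.1f}x slower; ~{loss:.0%} bandwidth loss")
```

So the hybrid path carries roughly a fifth of the plain-OVS throughput in this test.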
Related: Bug #1039400
- Add a vif-driver that is a hybrid of the existing Open vSwitch + linux bridge drivers, which allows OVS quantum plugins to be compatible with iptables-based filtering, in particular nova security groups.
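For context, the hybrid plug inserts a Linux bridge and a veth pair between the VM and the OVS integration bridge, so every packet crosses extra devices; the interface names below follow the usual qbr/qvb/qvo convention and are shown only as an illustration:

```
VM eth0 -> tap (vnetX) -> linux bridge (qbrXXX, iptables rules applied here)
        -> veth pair (qvbXXX <-> qvoXXX) -> OVS integration bridge (br-int)
```

The extra hops are one plausible source of the observed bandwidth loss.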
Changed in neutron:
status: New → Confirmed
assignee: nobody → yong sheng gong (gongysh)
If basic request/response performance between the two VMs on the same compute node is the same for both driver types (e.g. netperf TCP_RR), you might look at whether the "stateless offloads" are lost with the hybrid driver - things like CKO (checksum offload), TSO/GSO and/or GRO.
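One way to check those offloads is `ethtool -k` on each interface in the path (tap device, bridge ports, veth ends). A minimal sketch - the interface name is a placeholder (on a compute node it would be the VM's vnet/tap device), and the fallback keeps the snippet harmless where ethtool is absent:

```shell
# Inspect stateless offloads (checksum, TSO/GSO, GRO) on an interface.
# IF is a placeholder; replace with the VM's tap device, e.g. vnet0.
IF=${IF:-lo}
OFFLOADS=$(ethtool -k "$IF" 2>/dev/null \
  | grep -E 'checksum|tcp-segmentation|generic-segmentation|generic-receive' \
  || echo "ethtool not available for $IF")
echo "$OFFLOADS"
```

Comparing this output across the hybrid and plain-OVS paths would show whether any offload is disabled on the slower one.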
If the likes of netperf TCP_RR (or its equivalent) is very different between the two drivers, then there is likely a non-trivial path-length difference, at least some of which should be visible in a "perf" profile of the compute node.
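A hedged sketch of that follow-up (tool invocations only; the hostname and durations are placeholders, and the commands are printed rather than executed here since they need the actual VMs and root on the compute node):

```shell
# Store the suggested follow-up commands (printed, not executed here).
CMDS=$(cat <<'EOF'
# 1. Latency check: on VM A run "netserver", then on VM B:
netperf -H <vm-a-ip> -t TCP_RR -l 60
# 2. If TCP_RR differs a lot between drivers, profile the compute
#    node while a test is running, then inspect hot symbols:
perf record -a -g -- sleep 30
perf report --sort symbol
EOF
)
echo "$CMDS"
```

If TCP_RR is similar for both drivers, the offload check above it is the more promising direction; if it differs, the perf profile should show where the extra per-packet work goes.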