ovs plugin performance issue

Bug #1223267 reported by Robin Wang
This bug affects 8 people
Affects: Nova
Status: Confirmed
Importance: Undecided
Assigned to: yong sheng gong

Bug Description

To enable security groups, which are based on iptables, we configured the hybrid vif-driver when using the OVS plugin in our DC.

During networking performance testing, we observed a distinct bandwidth loss.

Test scenario: two VMs reside on the same compute node and belong to the same L2 network. When using the hybrid vif-driver (LibvirtHybridOVSBridgeDriver), throughput is 2.34 Gb/s. When using the plain OVS vif-driver (LibvirtOpenVswitchVirtualPortDriver), throughput is 12.7 Gb/s.

We should probably look into this bandwidth loss and figure out a solution. Thanks.
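The difference between the two runs comes down to the compute node's vif driver setting. A minimal sketch of the relevant nova.conf lines, assuming Grizzly-era option names (they may differ in later releases):

  # nova.conf on the compute node
  # hybrid driver, required for iptables-based security groups (the 2.34 Gb/s run):
  libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
  # plain OVS driver (the 12.7 Gb/s run):
  libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver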

Test result:
**************************************
Hybrid vif-driver (LibvirtHybridOVSBridgeDriver):
root@vm-A-1:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 128.100.200.66 port 5001 connected with 128.100.200.44 port 35483
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-180.0 sec 49.0 GBytes 2.34 Gbits/sec

Plain OVS vif-driver (LibvirtOpenVswitchVirtualPortDriver):
root@vm-A-1:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 128.100.200.33 port 5001 connected with 128.100.200.22 port 41452
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-180.0 sec 267 GBytes 12.7 Gbits/sec

**************************************
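For reference, the client side of these runs would have been invoked roughly as follows (a 180-second run against the first server's address, matching the intervals shown above):

  iperf -c 128.100.200.66 -t 180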
Bug #1039400

https://bugs.launchpad.net/neutron/+bug/1039400

    - Add a vif-driver that is a hybrid of the existing Open vswitch +
    linux bridge drivers, which allows OVS quantum plugins to
    be compatible with iptables based filtering, in particular, nova
    security groups.

Tags: performance
Revision history for this message
rick jones (perfgeek) wrote :

If basic request/response performance between the two VMs on the same compute node remains the same between the two driver types (e.g. netperf TCP_RR), you might look at whether the "stateless offloads" are lost in one of them - things like CKO, TSO/GSO and/or GRO.

If the likes of netperf TCP_RR (or its equivalent) is very different between the two driver versions, then there is likely a non-trivial path-length difference, at least some of which may be visible in the results of a "perf" profile of the compute node.
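A minimal sketch of the checks Rick suggests (the interface name is a placeholder; substitute the actual tap/qvb device on the compute node, and have netperf/netserver installed in both VMs):

  # request/response latency between the two VMs
  netperf -H 128.100.200.66 -t TCP_RR -l 30

  # check whether the stateless offloads are still enabled on a VM-facing interface
  ethtool -k <tap-or-qvb-interface> | egrep 'checksum|segmentation|receive-offload'

  # system-wide CPU profile of the compute node during an iperf run
  perf record -a -g -- sleep 30
  perf report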

Revision history for this message
Robin Wang (robinwang) wrote :

Thanks Rick for the advice.

Path length is one contributing factor. Under the hybrid vif-driver, a packet traverses two extra Linux bridges to get to the destination VM.

Also, we strongly suspect the two extra veth device pairs are another contributing factor. As shown in our tests, the performance of a veth device is not as good as that of an OVS patch port.
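For anyone who wants to reproduce that comparison, a rough sketch (hypothetical bridge and port names) of the two ways to connect a pair of OVS bridges:

  # OVS patch port pair between br-a and br-b
  ovs-vsctl add-port br-a patch-ab -- set interface patch-ab type=patch options:peer=patch-ba
  ovs-vsctl add-port br-b patch-ba -- set interface patch-ba type=patch options:peer=patch-ab

  # the same link built from a veth pair instead
  ip link add veth-a type veth peer name veth-b
  ip link set veth-a up
  ip link set veth-b up
  ovs-vsctl add-port br-a veth-a
  ovs-vsctl add-port br-b veth-b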

Revision history for this message
Robin Wang (robinwang) wrote :

Adding experts to the loop, and pinging for suggestions.

Some issues that we want to confirm:

* This issue was found in a Grizzly deployment. Is there any improvement planned or already implemented in Havana or a future release?

* The test scenario shows this issue affecting VMs on the same compute node. However, if the root cause is the longer path (2 extra Linux bridges) and the 2 extra veth pairs, this performance reduction should also affect VMs on different compute nodes.

* If the design remains the same in Havana, users could not fully utilize their 10GbE environment.

* The OVS plugin is quite popular in OpenStack deployments. I suggest keeping this issue on track and finding a solution eventually.

Thanks very much.

Revision history for this message
Salvatore Orlando (salvatore-orlando) wrote :

The hybrid vif driver is a rather well-known performance bottleneck.
For a deployment which does not need the iptables-based security group implementation, the hybrid driver should be avoided.
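A minimal sketch of what "avoiding the hybrid driver" looks like on a Grizzly compute node, assuming you can live without iptables-based security groups (Grizzly-era option names):

  # nova.conf
  libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver
  firewall_driver = nova.virt.firewall.NoopFirewallDriver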

From my point of view, I see two possible ways out.
One would be finding a way to allow iptables rules to work correctly in a pure OVS environment. Today Open vSwitch is not able to do so, as iptables rules are not enforced on OVS ports. This might happen in the future, but it is probably beyond Neutron's (and OpenStack's) scope.
The other solution would be a pure-OVS security groups driver, where rules are applied as OVS flows; this is feasible, even if probably not easy; hopefully such a driver will be provided during the Icehouse timeframe.
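To illustrate what "rules applied as OVS flows" could look like conceptually (hypothetical MAC address and priorities, stateless matching only; this is not an actual driver):

  # allow inbound SSH to one VM, drop all other IP traffic addressed to it
  ovs-ofctl add-flow br-int "priority=100,tcp,tp_dst=22,dl_dst=fa:16:3e:00:00:01,actions=NORMAL"
  ovs-ofctl add-flow br-int "priority=90,ip,dl_dst=fa:16:3e:00:00:01,actions=drop"

A real driver would also have to handle return traffic for established connections (connection tracking), which is a large part of why this is not easy.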

Revision history for this message
Simon (xchenum) wrote :

If we avoid the hybrid driver, what are the available mechanisms for enforcing security groups?

Changed in neutron:
status: New → Confirmed
assignee: nobody → yong sheng gong (gongysh)
Revision history for this message
yong sheng gong (gongysh) wrote :

I also tested and found that the reason is the veth path between the Linux bridge and the OVS bridge.

affects: neutron → nova-project
Revision history for this message
yong sheng gong (gongysh) wrote :

https://review.openstack.org/#/c/46911/

Before this patch, the iperf bandwidth is:
2.25 Gbits/sec
After it, it is:
19.8 Gbits/sec
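Judging from Kevin's ovs-vsctl and brctl listings further down in this thread, the patched setup appears to replace the qvb/qvo veth pair with a single OVS internal port attached to both br-int and the per-VM Linux bridge, roughly along these lines (hypothetical names; see the review link above for the actual change):

  ovs-vsctl add-port br-int qvm-example tag=1 -- set interface qvm-example type=internal
  ip link set qvm-example up
  brctl addif qbr-example qvm-example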

Revision history for this message
yong sheng gong (gongysh) wrote :

Robin Wang, could you run a test with my patch?
Thanks
Yong Sheng Gong

Revision history for this message
Robin Wang (robinwang) wrote :

Thanks Yong Sheng for the great help and for working on the patch. We'll test it and update with results soon.
Best,
Robin

Revision history for this message
Kevin Bringard (kbringard) wrote :

I'm running grizzly with OVS 1.10... so I had to make some changes to the patches to get them to work (I've attached them in one file), but I don't seem to see any improvement...

Prior to patching I have the following on my integration bridge:

f9b838db-e402-47f0-923d-81ad66f3cf7f
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo4912548e-43"
            tag: 1
            Interface "qvo4912548e-43"
        Port br-int
            Interface br-int
                type: internal
        Port "qvod855ae25-7d"
            tag: 1
            Interface "qvod855ae25-7d"

After patching (and rebooting to make sure everything got cleaned up), I have the following:

f9b838db-e402-47f0-923d-81ad66f3cf7f
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvm4912548e-43"
            tag: 1
            Interface "qvm4912548e-43"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qvmd855ae25-7d"
            tag: 1
            Interface "qvmd855ae25-7d"
                type: internal

brctl show has the following before:

bridge name bridge id STP enabled interfaces
qbr4912548e-43 8000.b60b11b69407 no qvb4912548e-43
       tap4912548e-43
qbrd855ae25-7d 8000.5aaf731c719e no qvbd855ae25-7d
       tapd855ae25-7d

and after:

bridge name bridge id STP enabled interfaces
qbr4912548e-43 8000.b60b11b69407 no qvm4912548e-43
       tap4912548e-43
qbrd855ae25-7d 8000.5aaf731c719e no qvmd855ae25-7d
       tapd855ae25-7d

So we can see that the veth pair is in fact eliminated. My VMs boot and can talk to each other, but my speed has not improved. VM-to-VM on the same host is approximately 1.5 Gbit/s, both before and after:

[SUM] 0.0-30.0 sec 5.14 GBytes 1.47 Gbits/sec

Any thoughts, or perhaps something I've missed?

Revision history for this message
Simon (xchenum) wrote : Re: [Bug 1223267] Re: ovs plugin performance issue

I haven't gotten to the stage of applying the patch yet; still working on some server build issues here. Will try it and let you know. Can't spot anything obviously wrong :-(


Revision history for this message
Robin Wang (robinwang) wrote :

Hi Yong Sheng,

Seems it works! Thanks very much for your work!

I verified it in a Grizzly setup that is deployed in a virtual environment (each compute node is a virtual machine).

With the patch, bandwidth increases to 6.38 Gb/s.
By contrast, in the same environment, on another compute node without the patch, bandwidth is still around 2 Gb/s.

I think the virtual deployment might be why I didn't reach bandwidth as high as in your test. Will try it on physical servers for further proof.

Best,
Robin

Revision history for this message
Jian Wen (wenjianhn) wrote :

Maybe this is a duplicate of bug 1201869.

Which version of kernel are you using?
If you are using Ubuntu 12.04, could you test kernel 3.5.0-41.64~precise1
that has been released a few hours ago?
If not, could you test kernel 3.9.9?

I don't think using OVS patch ports is a long-term solution, because of bug 1045613.

Revision history for this message
Simon (xchenum) wrote :

OK, I applied the patch and ran OVS 1.11.0 on Debian 7 with a 3.2.0 kernel.

Before the patch, iperf between two VMs would peak at 1 Gbps, and running with "-P 10" would essentially kill the throughput, dropping to something like 20 Mbps in aggregate. After applying the patch, I see a consistent 10 Gbps between two VMs. Running with "-P 30" doesn't kill the performance, but it doesn't improve on it either.

Yet, I don't seem to be able to get close to the 20 Gbps that Yong Sheng Gong reported...


Revision history for this message
li,chen (chen-li) wrote :

I'm running CentOS 2.6.32-358.114.1.openstack.el6.x86_64 + Grizzly and do not see this issue.
I believe it is an Ubuntu bug rather than a Nova one.

Thanks.
-chen

Revision history for this message
yong sheng gong (gongysh) wrote :

You can see my test record, taken under Ubuntu 13.04:
http://www.ustack.com/wp-content/uploads/2013/10/Neutron-performance-testing.pdf

Meanwhile, some people believe this is an Ubuntu kernel bug rather than an OpenStack one:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1201869

Revision history for this message
Francois Eleouet (fanchon) wrote :

These performance issues may come from the late introduction of TCP offloading support on veth interfaces, which landed in 3.11 kernels:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/net/veth.c?id=8093315a91340bca52549044975d8c7f673b28a1

If these features aren't available, single-host performance may be heavily impacted, as GRO/GSO won't be supported on veth, so large TCP segments are fragmented as they cross the veth pair. The performance impact is significant, as the vswitch has to forward 1.5 kB packets instead of 64 kB segments...

On a Debian 3.12 kernel, I got an almost 10x iperf difference while testing these features:

  # create two network namespaces to stand in for the two VMs
  ip netns add ns1
  ip netns add ns2

  # a Linux bridge plus one veth pair per namespace
  ip link add br-test type bridge
  ip link add veth-1 type veth peer name veth-ns1
  ip link add veth-2 type veth peer name veth-ns2

  # move one end of each pair into its namespace
  ip link set veth-ns1 netns ns1
  ip link set veth-ns2 netns ns2

  # attach the host-side ends to the bridge
  ip link set veth-1 master br-test
  ip link set veth-2 master br-test

  ip link set br-test up
  ip link set veth-1 up
  ip link set veth-2 up

  # address and bring up the namespace ends
  ip netns exec ns1 ip addr add 192.168.1.1/24 dev veth-ns1
  ip netns exec ns1 ip link set veth-ns1 up
  ip netns exec ns1 ip link set lo up

  ip netns exec ns2 ip addr add 192.168.1.2/24 dev veth-ns2
  ip netns exec ns2 ip link set veth-ns2 up
  ip netns exec ns2 ip link set lo up

  # run iperf across the veth -> bridge -> veth path
  ip netns exec ns2 iperf -s &
  ip netns exec ns1 iperf -c 192.168.1.2

=> results in 23 Gbits/sec on my computer
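As a quick sanity check (using the namespace and veth names from the sketch above), you can confirm which offloads the veth ends currently advertise before deciding whether this is the bottleneck:

  ip netns exec ns1 ethtool -k veth-ns1 | egrep 'segmentation|receive-offload'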

If I turn offloading features off, as per pre-3.11 kernels:

  ip netns exec ns1 ethtool -K veth-ns1 gso off gro off tso off
  ip netns exec ns2 ethtool -K veth-ns2 gso off gro off tso off

=> Throughput drops down to 2.66 Gbits/sec

Btw, the offload chain breakage caused by veth may also significantly impact flat/VLAN network performance, as packets won't be checksummed and segmented by the NIC any more; but it probably won't improve tunnelled traffic, as tunnel interfaces don't support offloading features on "old" kernels either.
