bad network performance with 10Gbit

Bug #602336 reported by zerocoolx
This bug affects 1 person
Affects: QEMU
Status: Expired
Importance: Undecided
Assigned to: Unassigned

Bug Description

Hello,
I have trouble with the network performance inside my virtual machines. I don't know if this is really a bug, but I didn't find a solution for this problem in other forums or mailing lists.

My KVM host machine is connected to a 10Gbit network. All interfaces are configured with an MTU of 4132. On the host itself I have no problems and can use the full bandwidth:
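
For reference, the MTU on each interface is set roughly like this (a sketch; the interface name eth2 is only an example):

ip link set dev eth2 mtu 4132    # interface name is an example
ip link show dev eth2            # verify that the new MTU took effect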

CPU_Info:
2x Intel Xeon X5570
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm ida tpr_shadow vnmi flexpriority ept vpid

KVM Version:
QEMU PC emulator version 0.12.3 (qemu-kvm-0.12.3), Copyright (c) 2003-2008 Fabrice Bellard
0.12.3+noroms-0ubuntu9

KVM Host Kernel:
2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64 GNU/Linux

KVM Host OS:
Ubuntu 10.04 LTS
Codename: lucid

KVM Guest Kernel:
2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64 GNU/Linux

KVM Guest OS:
Ubuntu 10.04 LTS
Codename: lucid

# iperf -c 10.10.80.100 -w 65536 -p 12345 -t 60 -P4
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-60.0 sec 18.8 GBytes 2.69 Gbits/sec
[ 5] 0.0-60.0 sec 15.0 GBytes 2.14 Gbits/sec
[ 6] 0.0-60.0 sec 19.3 GBytes 2.76 Gbits/sec
[ 3] 0.0-60.0 sec 15.1 GBytes 2.16 Gbits/sec
[SUM] 0.0-60.0 sec 68.1 GBytes 9.75 Gbits/sec

Inside a virtual machine I don't reach this result:

# iperf -c 10.10.80.100 -w 65536 -p 12345 -t 60 -P 4
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 5.65 GBytes 808 Mbits/sec
[ 4] 0.0-60.0 sec 5.52 GBytes 790 Mbits/sec
[ 5] 0.0-60.0 sec 5.66 GBytes 811 Mbits/sec
[ 6] 0.0-60.0 sec 5.70 GBytes 816 Mbits/sec
[SUM] 0.0-60.0 sec 22.5 GBytes 3.23 Gbits/sec

I can only use 3.23 Gbit/s of the 10 Gbit/s. I use the virtio driver for all of my VMs, but I have also tried the e1000 NIC device instead.
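
For reference, when a guest is started directly with qemu-kvm, the NIC is defined roughly like this (a sketch; the tap name and MAC address are only examples):

-netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
-device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56

For the e1000 test the -device line is swapped for:

-device e1000,netdev=net0,mac=52:54:00:12:34:56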

When I start the iperf performance test on multiple VMs simultaneously, I can use the full bandwidth of the KVM host's interface, but a single VM cannot use the full bandwidth on its own. Is this a known limitation, or can I improve this performance?

Does anyone have an idea how I can improve my network performance? It's very important, because I want to use the network interface to boot all VMs via AoE (ATA over Ethernet).

If I mount a hard disk via AoE inside a VM, I only get these results:
Write |CPU |Rewrite |CPU |Read |CPU
102440 |10 |51343 |5 |104249 |3

On the KVM host I get these results on a mounted AoE device:
Write |CPU |Rewrite |CPU |Read |CPU
205597 |19 |139118 |11 |391316 |11

If I mount the AoE device directly on the KVM host and put a virtual hard disk file on it, I get the following results inside a VM using this hard disk file:
Write |CPU |Rewrite |CPU |Read |CPU
175140 |12 |136113 |24 |599989 |29
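
For completeness, the AoE device is attached in the same way on the host and inside the guest, roughly like this (a sketch; the shelf/slot number e0.1 and the mount point are only examples):

modprobe aoe                     # load the ATA-over-Ethernet driver
aoe-discover                     # probe the network for exported targets
aoe-stat                         # lists devices such as e0.1
mount /dev/etherd/e0.1 /mnt/aoe  # shelf 0, slot 1 assumed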

I have just tested vhost_net, but without success. I upgraded my kernel to 2.6.35-6 with vhost_net support and installed the qemu-kvm version (0.12.50) from git://git.kernel.org/pub/scm/linux/kernel/git/mst/qemu-kvm.git, but I still get the same results as before.
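
For completeness, this is roughly how I enabled vhost_net (a sketch; the disk image, tap name and memory size are only examples):

modprobe vhost_net               # provides /dev/vhost-net
qemu-kvm -m 2048 -drive file=guest.img \
  -netdev tap,id=net0,ifname=tap0,vhost=on \
  -device virtio-net-pci,netdev=net0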

I have already posted my problem in a few forums, but got no reply so far.

I would be very happy if someone could help me.

best regards

Revision history for this message
Ken Sharp (kennybobs) wrote :

Have you tried compiling the latest upstream version to see if this is still an issue?

Changed in qemu:
status: New → Incomplete
Revision history for this message
zerocoolx (rene-scholtevanmast) wrote :

At the moment I'm using qemu version 0.12.3+noroms-0ubuntu9.18 from my Ubuntu distribution. I'm going to try to compile the latest upstream version during the next two weeks to verify whether this is still an issue.
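
Roughly, the plan for the build is (a sketch; the configure options are assumptions, and the exact upstream git URL is left out here):

# after fetching the latest upstream sources into ./qemu-kvm
cd qemu-kvm
./configure --target-list=x86_64-softmmu
make -j8
sudo make install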

Revision history for this message
john fisher (john-jpfisher) wrote :

Depending on your networking hardware, you may need to use a virtual function to access the 10G from a guest. On mine, failing to set up the XML file correctly resulted in a NAT connection or a bridge connection instead of the full-speed connection.

This requires libvirt 1.0.x, by the way.

The XML that works for me is:

<interface type='hostdev'>
  <mac address='52:54:00:c7:c3:91'/>
  <source>
    <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x0'/>
  </source>
  <target dev='macvtap1'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</interface>

Check your lspci output to get the actual bus numbers.
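
If the NIC supports SR-IOV, something like this shows and creates the virtual functions (a rough sketch; the interface name eth2 and the VF count are only examples, and the sysfs knob depends on your kernel and driver):

# list PCI virtual functions exposed by the adapter
lspci | grep -i "virtual function"

# create VFs on an SR-IOV capable adapter
echo 4 > /sys/class/net/eth2/device/sriov_numvfs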

Revision history for this message
Michael liu (ztehypervisor) wrote :

You can do some performance optimization, such as isolating CPUs, disabling SELinux, disabling nmi_watchdog, disabling intel_pstate, and so on. You can try a GRUB command line like this:
nmi_watchdog=0 selinux=0 intel_pstate=disable nosoftlockup isolcpus=4,5,6,7 nohz_full=4,5,6,7
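
On Ubuntu this would go into /etc/default/grub, roughly like this (a sketch; the isolated CPU list must match your own topology):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="nmi_watchdog=0 selinux=0 intel_pstate=disable nosoftlockup isolcpus=4,5,6,7 nohz_full=4,5,6,7"

# regenerate the GRUB configuration and reboot
sudo update-grub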

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for QEMU because there has been no activity for 60 days.]

Changed in qemu:
status: Incomplete → Expired