Comment 10 for bug 1079212

ITec (itec) wrote :

Yes, it seems a lot like an iperf bug.
I did the same test with uperf and got full bandwidth. Even bonding works well:

 ubuntu5# bmon
   #  Interface      RX Rate    RX #       TX Rate    TX #
 ------------------------------------------------------------------
 ubuntu5 (source: local)
   0  br2             39.00B         0     377.00B         0
   6  bond2        216.04MiB    213299   213.28MiB    179034
  12  vnet0        106.83MiB     35932   107.96MiB     51115
  15  eth1         103.92MiB    103566   102.63MiB     85928
  17  eth2         112.11MiB    109715   110.58MiB     93057
  22  vnet1         99.18MiB     32968   100.49MiB     49117
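
For comparison, a uperf run between the two hosts looks roughly like this; the profile file name is just a placeholder for whichever example workload you use, not my exact configuration:

 # on the remote endpoint, start uperf in slave mode
 remote# uperf -s

 # on ubuntu5, run the master with a TCP streaming profile
 # (e.g. one of the example workloads shipped with uperf)
 ubuntu5# uperf -m netperf.xml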

What I do not know is why this happens only when iperf runs on a virtual machine talking to another iperf instance on the network.
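
For anyone trying to reproduce the problematic case, an iperf setup along these lines should show it; the hostnames/IP are placeholders, not my exact values:

 # on a physical host elsewhere on the network
 server# iperf -s

 # inside the guest (the vnet0/vnet1 interfaces above),
 # pointing at that server for 30 seconds
 guest# iperf -c <server-ip> -t 30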