Comment 0 for bug 1698454

Brandon Bates (brandonbates) wrote:

I'm having a severe TCP receive problem with a new Ubuntu 16.04 server and an Intel 82599ES 10-Gigabit SFP+ (X520 series) card when a Windows 10 machine is used to send the data. Receive performance holds steady at 1-2 Gbit/s with a single iperf stream, while send is 9.42 Gbit/s.

Here is what I have found:

-Running 8 parallel streams in iperf gets me the full 9.42 Gbit/s in aggregate
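For reference, these are plain iperf (v2) runs from the Windows sender, roughly as below; 10.0.0.1 just stands in for the server's actual address:

  iperf -c 10.0.0.1 -t 30        # single stream: holds at 1-2 Gbit/s
  iperf -c 10.0.0.1 -t 30 -P 8   # 8 parallel streams: full 9.42 Gbit/s aggregate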

-Tried a few different Windows 10 machines with the same repeatable results

-Same results through a switch or connected directly

-Linux to Linux shows no problems

-Tested with clean installs of 16.04.2 and the 4.4.0-66 kernel (the latest the upgrade will give me); also tried 17.04 with up to 4.10.0-20-generic, same problem. Tried 4.11.0-041100rc8-generic and the problem seemed to go away for a bit but came back, so I think that might be a red herring (see the interesting note below).

-14.04 and 15.10 with up to 4.2.8-040208-generic get 9.42/9.42 and work fine (couldn't get 4.3 to install on 15.10)

-14.04 with the latest 5.0.4 ixgbe driver still works fine, so this does not seem to be a driver version issue

-Tried swapping cards with another of the same model/chipset, with exactly the same result

-Enabling large receive offload increases the performance from a steady 1 Gbit/s to a steady 2.14 Gbit/s
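LRO was toggled with ethtool along these lines (enp3s0 stands in for the actual interface name):

  ethtool -K enp3s0 lro on    # with LRO: steady ~2.14 Gbit/s
  ethtool -K enp3s0 lro off   # the default: steady ~1 Gbit/s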

-Disabling SACK via sysctl got me 75% of the way back to full speed, but it was very unstable (didn't hold at a solid speed)
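That toggle was just the standard sysctl knob:

  sysctl -w net.ipv4.tcp_sack=0   # disable selective ACKs: ~75% of full speed, unstable
  sysctl -w net.ipv4.tcp_sack=1   # back to the default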

-Using a different brand of card in this same server (a Mellanox InfiniBand card running in Ethernet mode, though not in the same slot) gets 9.42/8.5 and seems to work fine (8.5 is the Windows machine's limit, I think)

-Interesting: when swapping between the Intel and Mellanox 10GbE cards (with a reboot of the server in between, but not of the Windows machine, and keeping the same IP on the server), the performance does not change immediately. Going from Intel to Mellanox, the first test holds around 1-2 Gbit/s, then jumps up to a steady 8.5. Similarly, switching from Mellanox to Intel, the first 1-2 seconds hit 8.x, then throughput drops by half or more, and within 3-4 seconds it is back to 1 Gbit/s and stays there for each subsequent iperf test.

-Interesting: when capturing packets with Wireshark on Windows, performance comes up to 8.5! (No, ditching Windows isn't an option here, unfortunately. :) So obviously something in the way the two TCP stacks interact, with no buffer in between, is causing issues when the Intel driver is online.

-Port bonding makes no difference (nor should it for a single stream)

-Tried rolling the Intel driver back to the 3.2.1 version that ships with 14.04, but it was too old to compile
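That attempt was the usual out-of-tree build of Intel's driver tarball (the filename here is only illustrative):

  tar xzf ixgbe-3.2.1.tar.gz
  cd ixgbe-3.2.1/src
  make    # fails against the newer kernel headers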

-I suspect a kernel TCP/IP implementation change somewhere between 4.2 and 4.4 has caused it to not play nicely with Windows' stack. Based on the delayed performance change, I'm thinking something is messing with flow control, the TCP congestion algorithm, or the TCP window. I tried turning various TCP options off, tweaking some values, changing congestion algorithms and hardware flow control, and comparing sysctl settings from the u14 machine to this one, all to no avail; examples of the kind of things I tried are below.
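To give an idea of what was tried (the values and the eth0 name are just examples, not recommendations):

  # congestion control algorithm
  sysctl net.ipv4.tcp_congestion_control            # cubic by default on these kernels
  sysctl -w net.ipv4.tcp_congestion_control=reno    # try a different algorithm

  # hardware (pause frame) flow control
  ethtool -a eth0               # show the current pause settings
  ethtool -A eth0 rx on tx on   # toggle flow control

  # TCP window scaling and similar options
  sysctl -w net.ipv4.tcp_window_scaling=1

  # dump the TCP sysctls on both machines and diff the files
  sysctl -a 2>/dev/null | grep tcp > tcp-sysctls.txt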