Hi Rodrigo, there is a possibility that the problem is not a regression in GRO handling but rather the introduction of GRO support itself. I can find the following commit in 3.13-rc1:
commit 99d3d587b2b4314ccc8ea066cb327dfb523d598e
Author: Wei Liu <email address hidden>
Date: Mon Sep 30 13:46:34 2013 +0100
xen-netfront: convert to GRO API
Now I tried to reproduce the performance issue locally, using a Xen 4.4.1 Trusty host running a Trusty PV guest. On the guest side I start iperf in server mode (since that is the receiving side; to be sure, I also reversed the setup with the same results), and on a desktop running Trusty I start iperf in client mode, connecting to the PV guest. The desktop and the host both have 1Gbit NICs. With that I get an average of about 850 Mbit/s over 10 runs, which is as good as I would expect, and it does not change significantly whether I enable or disable GRO.
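For reference, the measurement above can be sketched roughly as below. This is only an outline of my test, not an exact transcript; the interface name, guest IP, and 10-second run length are assumptions you would adjust for your own setup.

```shell
# Sketch of the GRO comparison test (names/addresses are placeholders).
IFACE="${IFACE:-eth0}"          # guest-side NIC, assumed name
SERVER="${SERVER:-192.0.2.10}"  # PV guest's IP, placeholder address

# In the PV guest (receive side): toggle GRO, then listen with iperf.
#   ethtool -K "$IFACE" gro off   # or "gro on" for the comparison run
#   iperf -s
#
# On the external client: 10 runs against the guest, then average the
# reported bandwidth. Shown here as echoed commands so the loop shape
# is clear without needing iperf installed.
for run in $(seq 1 10); do
    echo "run $run: iperf -c $SERVER -t 10"
done
```

Repeating the whole loop once with `gro off` and once with `gro on` is what showed me no significant difference at 1Gbit.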
What we do not know is what is involved, network-wise, between your server and the guests. I am not really sure how this slowdown could happen in your setup.