Oh, ok. It does work quite well on my local guests that come up with 1500 MTU. Maybe the EC2 guests would need a bigger data size value than 1000. But yeah, as long as I have some way to verify whatever comes up to fix this, it is ok.
Yes, the loss of jumbo frames was expected. As long as high throughput is not critical it is at least good enough as a work-around.
About an upstream discussion: http://www.spinics.net/lists/netdev/msg282340.html
Basically it looks like the problem was somewhat known but probably did not happen often enough, or was actually complicated to fix. It appears that other drivers will not have this issue as long as the limit is on the actual transfer size and not on the number of pages required to accommodate the frags/scatter-gather list. Unfortunately Xen has a limit there that guests have to honour, because otherwise the host-side driver would shut down the connection completely.
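To illustrate why a page-count limit is stricter than a transfer-size limit: two frames with the same total byte count can need very different numbers of ring slots depending on how their fragments align to page boundaries. The sketch below is just an illustration, not the driver's actual code; the slot limit of 18 is an assumption (roughly the minimum slot count Linux's netfront expects), and the helper names are made up.

```python
PAGE_SIZE = 4096
MAX_SLOTS = 18  # assumed guest-side slot budget, for illustration only


def pages_spanned(offset, size):
    """Number of pages a single fragment touches, given its offset into
    the first page and its length in bytes."""
    if size == 0:
        return 0
    first_page = offset // PAGE_SIZE
    last_page = (offset + size - 1) // PAGE_SIZE
    return last_page - first_page + 1


def slots_needed(frags):
    """Total ring slots for a list of (offset, size) fragments: one
    grant (slot) per page touched, summed over all fragments."""
    return sum(pages_spanned(off, sz) for off, sz in frags)


# Same payload (64 KiB) either way, but the unaligned layout needs
# twice the slots because every fragment straddles a page boundary.
aligned = [(0, 4096)] * 16       # 16 page-aligned 4 KiB fragments
unaligned = [(2048, 4096)] * 16  # 16 fragments, each crossing a page

print(slots_needed(aligned))    # fits a small slot budget
print(slots_needed(unaligned))  # same bytes, double the slots
```

This is why a driver that only caps total transfer size can still produce a frame the Xen backend refuses: the byte count is fine, but the fragment layout spills over the per-packet slot limit.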