Networking Dies Under Heavy Load

Bug #194304 reported by Wolfgang Glas
Affects       Status        Importance  Assigned to      Milestone
KVM           Unknown       Unknown
kvm (Ubuntu)  Fix Released  High        Dustin Kirkland

Bug Description

I am running into the same problem described at

http://sourceforge.net/tracker/index.php?func=detail&aid=1802082&group_id=180599&atid=893831

on a quad-core Intel(R) Xeon(R) CPU X3220 @ 2.40GHz with 8 GB of RAM, running
hardy-alpha4 with all updates as of 2008-02-16 and a gutsy guest using bridged networking.

  What can I do to shed more light on this issue?
Or could the Ubuntu virtualization team try to reproduce the problem?

   TIA for any help, Wolfgang

Revision history for this message
Soren Hansen (soren) wrote : Re: [Bug 194304] [NEW] Networking Dies Under Heavy Load

On Fri, Feb 22, 2008 at 09:12:13AM -0000, Wolfgang Glas wrote:
> I am running into the same problem described at
>
> http://sourceforge.net/tracker/index.php?func=detail&aid=1802082&group_id=180599&atid=893831
>
> on a quad-core Intel(R) Xeon(R) CPU X3220 @ 2.40GHz with 8 GB of RAM, running
> hardy-alpha4 with all updates as of 2008-02-16 and a gutsy guest using bridged networking.

How are you running kvm? Are you using libvirt or kvm directly from a
command line? If the former, can you send me the XML definition of the
domain, and if the latter, can you send me the exact command line?

--
Soren Hansen
Virtualisation specialist
Ubuntu Server Team
http://www.ubuntu.com/

Revision history for this message
Wolfgang Glas (wglas) wrote :

Thanks for your reply, I've attached my libvirt description of the machine. This is my only virtual domain with 4 CPUs assigned. All other domains (2 in my case) have only a single CPU and are not (yet) affected by this problem.
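
The attachment itself is not reproduced here; purely as an illustration, a 4-CPU domain with bridged networking is typically defined along these lines (the domain name, memory size, disk path and bridge name below are placeholders, not values from the actual attachment):

    <domain type='kvm'>
      <name>guest</name>                    <!-- placeholder name -->
      <memory>2097152</memory>              <!-- 2 GB, illustrative -->
      <vcpu>4</vcpu>                        <!-- the 4 assigned CPUs -->
      <os>
        <type arch='x86_64'>hvm</type>
        <boot dev='hd'/>
      </os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/var/lib/libvirt/images/guest.img'/>  <!-- placeholder path -->
          <target dev='hda'/>
        </disk>
        <interface type='bridge'>
          <source bridge='br0'/>            <!-- placeholder bridge name -->
        </interface>
      </devices>
    </domain>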

Revision history for this message
Wolfgang Glas (wglas) wrote :

FYI, I've just tried to overcome this problem by reducing the number of assigned CPUs from 4 to 3 for the virtual machine that loses its network connectivity under load. (This is a quad-core machine...) However, reducing the number of processors does not solve the problem.
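
For reference, one way to change the assigned CPU count with the libvirt tools is to dump the domain XML, edit the <vcpu> element and re-define the domain; "guest" below is a placeholder for the actual domain name:

    virsh dumpxml guest > guest.xml
    # change <vcpu>4</vcpu> to <vcpu>3</vcpu> in guest.xml
    virsh define guest.xml
    virsh shutdown guest
    virsh start guest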

Revision history for this message
Wolfgang Glas (wglas) wrote :

OK, after trying with 3 CPUs assigned instead of 4, it seems that assigning 3 CPUs actually made the problem worse. The more likely the guest is to hit 100% CPU usage, the more likely the network timeouts become, and with 4 CPUs there is a better chance that the guest does not use 100% of its CPU time.

Revision history for this message
Wolfgang Glas (wglas) wrote :

Further experiments based on suggestions from the sourceforge mailing list revealed that using the noapic kernel boot option inside the guest slightly improves the situation, but my network adapter still goes down if the CPU load reaches 100% (all cores used) for several minutes...
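
For anyone who wants to try the same workaround: on a gutsy guest the option goes on the kernel line of the guest's /boot/grub/menu.lst (the kernel version and root device below are only examples):

    title  Ubuntu 7.10, kernel 2.6.22-14-generic
    root   (hd0,0)
    kernel /boot/vmlinuz-2.6.22-14-generic root=/dev/sda1 ro quiet splash noapic
    initrd /boot/initrd.img-2.6.22-14-generic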

Changed in kvm:
assignee: nobody → kirkland
importance: Undecided → High
Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Hello,

Can you still reproduce this problem on Hardy?

What about on Jaunty?

I'm looking at a number of networking issues with KVM, and I'd like to see if this particular one is still reproducible...
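
As a rough sketch of how the failure mode described in this report can be provoked: keep every guest core busy while pushing sustained traffic over the bridged interface for several minutes (the file path and target host are placeholders):

    # inside the guest: one busy loop per core (4 in this report)
    for i in 1 2 3 4; do ( while :; do :; done ) & done

    # in parallel: sustained traffic over the bridged NIC
    while :; do scp /path/to/large-file user@remote-host:/tmp/; done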

:-Dustin

Changed in kvm:
status: New → Incomplete
Revision history for this message
Wolfgang Glas (wglas) wrote :

Sorry Dustin,

  I gave up on kvm after getting no response to this bug report for two weeks or so.

  Honestly, I got the impression that compared with OpenVZ or Xen, KVM is orders of magnitude away from server-grade stability. On the desktop side, qemu is a very old and outdated hardware emulation layer, so there is much work to do there, too.

   However, I wish you good luck in debugging kvm.

       Best regards,

          Wolfgang

Revision history for this message
Dustin Kirkland  (kirkland) wrote :

Okay, I believe that I have fixed this in Jaunty. Please reopen if you (or anyone else) can reproduce this issue.

Thanks,
:-Dustin

Changed in kvm:
status: Incomplete → Fix Released