No, the vhost_net module is not loaded.
Loading it does not help.
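For reference, checking and loading the module is just the standard commands, roughly:

  lsmod | grep vhost_net    # not listed initially
  modprobe vhost_net
  lsmod | grep vhost_net    # module is loaded now, but the hang still occurs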
No, 'ifdown eth0; ifup eth0' in the guest does not reliably bring networking back up. Sometimes it works, sometimes I have to reboot the VM; the last few times it did not work at all.
My problem might be the same as bug 997978.
But I only see it in conjunction with bridged bonding, and so far only after heavy load, not simply after time has passed.
Maybe it could occur after a longer period of time on my system, too.
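For context, the host networking is bridged bonding, roughly like the following /etc/network/interfaces sketch (interface names, addresses and bonding options here are illustrative, not my exact config):

  auto bond0
  iface bond0 inet manual
      bond-slaves eth0 eth1
      bond-mode 802.3ad
      bond-miimon 100

  auto br0
  iface br0 inet static
      address 192.168.1.10
      netmask 255.255.255.0
      bridge_ports bond0
      bridge_stp off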
qemu-kvm from ppa:ubuntu-virt/backports and ppa:ubuntu-virt/kvm-network-hang both work well so far, but I have not done any long-term testing; the hang could still occur after some time.
I am now running my iperf test against the kvm-network-hang version for several hours, but I cannot test indefinitely.
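The iperf test itself is nothing special, roughly like this (exact options and durations are illustrative):

  # on the host side:
  iperf -s

  # in the guest, repeated in a loop for several hours:
  iperf -c <host-ip> -t 600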
What are the differences between the official version of qemu-kvm and the one in kvm-network-hang?
I really need to know whether it is reliable in order to decide whether or not to use it on production systems!