e1000 takes a long time (2 seconds) to set link ready

Bug #1770724 reported by Ihar Hrachyshka
Affects: QEMU (Status: Expired, Importance: Undecided, Assigned to: Unassigned, Milestone: none)

Bug Description

When a VM is booted with an e1000 NIC, it takes a long time (about 2 seconds) for the guest to bring the link up. This can be seen in the following dmesg messages:

[ 4.899773] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 6.889165] e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[ 6.891372] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
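
To see the gap, it is enough to grep the guest's dmesg for the relevant events; the roughly 2-second delta between NETDEV_UP and "NIC Link is Up" is the delay in question (assuming a grep with -E support, which busybox's usually has):

  dmesg | grep -E 'NETDEV_UP|NIC Link is Up|NETDEV_CHANGE'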

The first message appears when the guest runs ifup eth0; ifup does not block until the link is established. The guest I am using (cirros 0.4.0) then starts the udhcpc DHCP client, which issues a DHCP request, waits 60 seconds for a reply, then repeats the request. When the first request is sent, the link is not ready yet, so the frame is lost; by the time the second request is sent, the link is up and a DHCP lease is received.
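
As a guest-side mitigation sketch (hypothetical, not something cirros does today; assumes the interface is eth0 and a sleep that accepts fractional seconds, as busybox's usually does), one could poll the carrier flag after ifup and only then start the DHCP client, so the first request is not sent into a dead link:

  ifup eth0                      # returns before the link is established
  i=0
  while [ "$(cat /sys/class/net/eth0/carrier 2>/dev/null)" != "1" ] && [ $i -lt 50 ]; do
      sleep 0.1                  # wait up to ~5 seconds for the carrier
      i=$((i + 1))
  done
  udhcpc -i eth0                 # first DHCP request now hits a live link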

If I use a different NIC model (e1000e, virtio, rtl*), there are no such dmesg messages, and the very first DHCP request reaches the outside network and results in a lease being acquired.
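
For illustration, swapping the NIC model only means changing the -device line; variants along these lines (assumed equivalents of what I tested, with the netdev/mac values taken from the full command line below) show no delay:

  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=0a:58:0a:f4:01:e1
  -device e1000e,netdev=hostnet0,id=net0,mac=0a:58:0a:f4:01:e1
  -device rtl8139,netdev=hostnet0,id=net0,mac=0a:58:0a:f4:01:e1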

The qemu version I am using is 2.10.1 from Fedora 27. I tried to reproduce with the runtime from Fedora 29, which includes 2.12, but I hit different issues there that block me from reproducing the original one (kernel traces, IRQ errors, and no network link at all).

For the record, the qemu in question is started by kubevirt inside a Docker container based on a Fedora 27 image.

===============================

The command line of qemu (pid 27404, from ps) is as follows:

/usr/bin/qemu-system-x86_64 \
    -machine accel=kvm \
    -name guest=default_ovm-cirros,debug-threads=on \
    -S \
    -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-default_ovm-cirros/master-key.aes \
    -machine pc-q35-2.10,accel=kvm,usb=off,dump-guest-core=off \
    -m 62 \
    -realtime mlock=off \
    -smp 1,sockets=1,cores=1,threads=1 \
    -uuid 8769fdbe-d957-5567-bd71-114ba0eb4811 \
    -no-user-config -nodefaults \
    -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-default_ovm-cirros/monitor.sock,server,nowait \
    -mon chardev=charmonitor,id=monitor,mode=control \
    -rtc base=utc \
    -no-shutdown -no-acpi \
    -boot strict=on \
    -device i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1e \
    -device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x0 \
    -device pcie-root-port,port=0x10,chassis=3,id=pci.3,bus=pcie.0,multifunction=on,addr=0x2 \
    -device pcie-root-port,port=0x11,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x1 \
    -device pcie-root-port,port=0x12,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x2 \
    -device pcie-root-port,port=0x13,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x3 \
    -device pcie-root-port,port=0x14,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x4 \
    -device nec-usb-xhci,id=usb,bus=pci.3,addr=0x0 \
    -drive file=/var/run/libvirt/kubevirt-ephemeral-disk/registry-disk-data/default/ovm-cirros/disk_registryvolume/disk-image.raw,format=raw,if=none,id=drive-virtio-disk0 \
    -device virtio-blk-pci,scsi=off,bus=pci.4,addr=0x0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
    -drive file=/var/run/libvirt/kubevirt-ephemeral-disk/cloud-init-data/default/ovm-cirros/noCloud.iso,format=raw,if=none,id=drive-virtio-disk1 \
    -device virtio-blk-pci,scsi=off,bus=pci.5,addr=0x0,drive=drive-virtio-disk1,id=virtio-disk1 \
    -netdev tap,fd=23,id=hostnet0 \
    -device e1000,netdev=hostnet0,id=net0,mac=0a:58:0a:f4:01:e1,bus=pci.2,addr=0x1 \
    -chardev socket,id=charserial0,path=/var/run/kubevirt-private/default/ovm-cirros/virt-serial0,server,nowait \
    -device isa-serial,chardev=charserial0,id=serial0 \
    -vnc vnc=unix:/var/run/kubevirt-private/default/ovm-cirros/virt-vnc \
    -device VGA,id=video0,vgamem_mb=16,bus=pcie.0,addr=0x1 \
    -device virtio-balloon-pci,id=balloon0,bus=pci.6,addr=0x0 \
    -msg timestamp=on
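
To reproduce outside the kubevirt/libvirt stack, a trimmed-down invocation along these lines should be enough to show the delay (a sketch only: the image path and the tap0 setup are placeholders, not taken from the deployment above):

  /usr/bin/qemu-system-x86_64 \
      -machine pc-q35-2.10,accel=kvm \
      -m 62 -smp 1 \
      -drive file=/path/to/cirros-0.4.0-x86_64-disk.img,format=qcow2,if=virtio \
      -netdev tap,id=hostnet0,ifname=tap0,script=no,downscript=no \
      -device e1000,netdev=hostnet0,id=net0,mac=0a:58:0a:f4:01:e1 \
      -nographic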

============================

Hypervisor versions of interest:

[vagrant@node02 ~]$ sudo docker exec -it $(sudo docker ps | grep virt-launcher-ovm-cirros | grep entrypoint | awk -e '{print $1}') rpm -qa | grep 'qemu\|libvirt'
qemu-block-curl-2.10.1-2.fc27.x86_64
qemu-block-ssh-2.10.1-2.fc27.x86_64
qemu-block-nfs-2.10.1-2.fc27.x86_64
qemu-system-x86-core-2.10.1-2.fc27.x86_64
libvirt-daemon-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-storage-disk-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-storage-mpath-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-storage-zfs-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-nwfilter-3.7.0-4.fc27.x86_64
qemu-img-2.10.1-2.fc27.x86_64
qemu-common-2.10.1-2.fc27.x86_64
qemu-block-dmg-2.10.1-2.fc27.x86_64
qemu-block-rbd-2.10.1-2.fc27.x86_64
qemu-system-x86-2.10.1-2.fc27.x86_64
libvirt-libs-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-storage-core-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-qemu-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-storage-gluster-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-storage-logical-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-storage-rbd-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-storage-sheepdog-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-storage-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-nodedev-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-secret-3.7.0-4.fc27.x86_64
libvirt-client-3.7.0-4.fc27.x86_64
ipxe-roms-qemu-20161108-2.gitb991c67.fc26.noarch
qemu-block-gluster-2.10.1-2.fc27.x86_64
qemu-block-iscsi-2.10.1-2.fc27.x86_64
qemu-kvm-2.10.1-2.fc27.x86_64
libvirt-daemon-driver-network-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-storage-iscsi-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-storage-scsi-3.7.0-4.fc27.x86_64
libvirt-daemon-driver-interface-3.7.0-4.fc27.x86_64
libvirt-daemon-kvm-3.7.0-4.fc27.x86_64

[vagrant@node02 ~]$ uname -a
Linux node02 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

[vagrant@node02 ~]$ cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)

=============================

Guest:

$ uname -a
Linux ovm-cirros 4.4.0-28-generic #47-Ubuntu SMP Fri Jun 24 10:09:13 UTC 2016 x86_64 GNU/Linux

=============================

Bug trackers for other projects:
- cirros: https://bugs.launchpad.net/cirros/+bug/1768955
- kubevirt: https://github.com/kubevirt/kubevirt/issues/936

Thomas Huth (th-huth) wrote:

The QEMU project is currently considering moving its bug tracking to another system. For this we need to know which bugs are still valid and which can already be closed, so we are setting older bugs to "Incomplete" now.
If you still think this bug report is valid, please switch the state back to "New" within the next 60 days; otherwise this report will be marked as "Expired". Or mark it as "Fix Released" if the problem has already been solved in a newer version of QEMU. Thank you, and sorry for the inconvenience.

Changed in qemu:
status: New → Incomplete
Launchpad Janitor (janitor) wrote:

[Expired for QEMU because there has been no activity for 60 days.]

Changed in qemu:
status: Incomplete → Expired