This occurred during deployment of the DPDK compute node; DPDK itself was not deployed. The bond0 and bond1 configuration appears to have failed.
-----------8<-----------
Preliminary analysis: despite the warning message, LRO appears to already be off after the kernel traces. I am investigating the nature of the failure.
This bug appears to be relevant: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1660146
All NICs involved (ethtool -i):
driver: ixgbe
version: 3.15.1-k
firmware-version: 0x800008bd
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no
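Since the preliminary analysis hinges on whether LRO is actually enabled on the slaves, here is a minimal sketch of pulling the LRO flag out of captured `ethtool -k` output. The sample values below are illustrative only, not taken from the affected host:

```shell
# Extract the large-receive-offload state from captured `ethtool -k` output.
# Sample values are illustrative, NOT from the affected host.
sample='rx-checksumming: on
large-receive-offload: off [fixed]
generic-receive-offload: on'
printf '%s\n' "$sample" | awk -F': ' '/large-receive-offload/ {print $2}'
```

On the real host one would feed `ethtool -k enp3s0f0` (and the other three slaves) through the same filter; an `[fixed]` suffix would mean the driver does not allow toggling the flag, which would be consistent with dev_disable_lro() complaining.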
Bonds:
bond0: Adding slave enp131s0f1.
bond0: Adding slave enp3s0f0.
bond1: Adding slave enp131s0f0.
bond1: Adding slave enp3s0f1.
Overview of dmesg:
[ 27.423698] bonding: bond0: link status definitely up for interface enp131s0f1, 10000 Mbps full duplex.
[ 27.423704] bonding: bond0: link status definitely up for interface enp3s0f0, 10000 Mbps full duplex.
[ 27.423711] bonding: bond1: link status definitely up for interface enp3s0f1, 10000 Mbps full duplex.
[ 27.423716] bonding: bond1: link status definitely up for interface enp131s0f0, 10000 Mbps full duplex.
[ 27.434165] device bond0.2004 entered promiscuous mode
[ 27.434692] ------------[ cut here ]------------
[ 27.434704] WARNING: CPU: 1 PID: 3942 at /build/linux-90Gc2C/linux-3.13.0/net/core/dev.c:1433 dev_disable_lro+0x87/0x90()
[ 27.434708] netdevice: bond0.2004
[ 27.434708] failed to disable LRO!
[ 27.523777] bonding: bond0: link status down for interface enp131s0f1, disabling it in 1000 ms.
[ 28.916892] ixgbe 0000:83:00.1 enp131s0f1: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[ 28.920854] bonding: bond0: link status up again after 1000 ms for interface enp131s0f1.
[ 29.020935] bonding: bond0: link status down for interface enp3s0f0, disabling it in 1000 ms.
[ 29.401337] ixgbe 0000:03:00.0 enp3s0f0: detected SFP+: 3
[ 29.541388] ixgbe 0000:03:00.0 enp3s0f0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[ 29.621400] bonding: bond0: link status up again after 600 ms for interface enp3s0f0.
[ 29.338269] device bond1.2001 entered promiscuous mode
[ 29.338689] ------------[ cut here ]------------
[ 29.338699] WARNING: CPU: 0 PID: 3944 at /build/linux-90Gc2C/linux-3.13.0/net/core/dev.c:1433 dev_disable_lro+0x87/0x90()
[ 29.338702] netdevice: bond1.2001
[ 29.338702] failed to disable LRO!
[ 29.429252] bonding: bond1: link status down for interface enp3s0f1, disabling it in 1000 ms.
[ 29.824836] ixgbe 0000:03:00.1 enp3s0f1: detected SFP+: 4
[ 29.829564] bonding: bond1: link status down for interface enp131s0f0, disabling it in 1000 ms.
[ 30.069790] ixgbe 0000:03:00.1 enp3s0f1: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[ 30.129796] bonding: bond1: link status up again after 700 ms for interface enp3s0f1.
[ 31.244068] IPv6: ADDRCONF(NETDEV_UP): br-mesh: link is not ready
[ 31.246724] ixgbe 0000:83:00.0 enp131s0f0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[ 31.250679] bonding: bond1: link status up again after 1000 ms for interface enp131s0f0.
[ 33.324580] device bond0 entered promiscuous mode
[ 33.324585] device enp131s0f1 entered promiscuous mode
[ 33.324995] device enp3s0f0 entered promiscuous mode
[ 33.337620] device bond1 entered promiscuous mode
[ 33.337624] device enp3s0f1 entered promiscuous mode
[ 33.338094] device enp131s0f0 entered promiscuous mode
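Worth noting: the warning names only the 802.1q devices stacked on the bonds (bond0.2004 and bond1.2001), never the physical ixgbe ports. A small filter over sample lines copied from the log above makes that explicit:

```shell
# List the devices named by the LRO warning (sample lines copied from the
# dmesg excerpt above).
log='[ 27.434708] netdevice: bond0.2004
[ 27.434708] failed to disable LRO!
[ 29.338702] netdevice: bond1.2001
[ 29.338702] failed to disable LRO!'
printf '%s\n' "$log" | sed -n 's/.*netdevice: //p'
```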
-----------8<-----------
I reproduced the trace in a KVM vm running Trusty/3.13. Info below:
ubuntu@new-urchin:~$ uname -a
Linux new-urchin 3.13.0-145-generic #194-Ubuntu SMP Thu Apr 5 15:20:44 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
ubuntu@new-urchin:~$ dpkg -l | grep -E "vlan|ifenslave|bridge-utils"
ii bridge-utils 1.5-6ubuntu2 amd64 Utilities for configuring the Linux Ethernet bridge
ii ifenslave 2.4ubuntu1.2 all configure network interfaces for parallel routing (bonding)
ii vlan 1.9-3ubuntu10.5 amd64 user mode programs to enable VLANs on your ethernet devices
I added two extra e1000 NICs to the VM (eth1 and eth2). Both were configured to attach to a host-only network.
Reproducer build:
apt install ifenslave vlan bridge-utils
echo bonding >> /etc/modules
Used this /etc/network/interfaces file: << EOF
auto lo
iface lo inet loopback
dns-nameservers 192.168.200.2
dns-search maas
auto ens6
iface ens6 inet static
dns-nameservers 192.168.200.2
gateway 192.168.200.1
address 192.168.200.65/24
mtu 1500
# Reproducer-relevant config
auto bond0.2004
iface bond0.2004 inet manual
mtu 9100
vlan-raw-device bond0
auto bond0
iface bond0 inet manual
mtu 9100
bond-slaves eth1 eth2
bond-mode 802.3ad
bond-miimon 100
bond-use-carrier 1
bond-lacp-rate fast
bond-updelay 3000
bond-downdelay 1000
bond-ad-select bandwidth
bond-xmit-hash-policy layer3+4
post-up sleep 45
auto br-mesh
iface br-mesh inet static
bridge_ports bond0.2004
address 192.168.210.3/24
auto eth1
iface eth1 inet manual
mtu 9100
bond-master bond0
auto eth2
iface eth2 inet manual
mtu 9100
bond-master bond0
source /etc/network/interfaces.d/*.cfg
EOF
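If the goal were only to suppress the warning, one hypothetical (untested) tweak to the slave stanzas above would be to force LRO off on each slave before it joins the bond, so that dev_disable_lro() on the VLAN device has nothing left to propagate. `pre-up` and the `$IFACE` variable are standard ifupdown; whether this actually silences the warning on 3.13 is an assumption, not something I have verified:

```
# Hypothetical workaround sketch (untested) for /etc/network/interfaces:
auto eth1
iface eth1 inet manual
    mtu 9100
    bond-master bond0
    # assumption: disabling LRO up front avoids the dev_disable_lro() warning
    pre-up ethtool -K $IFACE lro off || true
```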
Continued in Part 2
Part 2:
Rebooted, got this trace:
[ 3.832420] device bond0.2004 entered promiscuous mode
[ 3.832447] ------------[ cut here ]------------
[ 3.832452] WARNING: CPU: 0 PID: 1196 at /build/linux-dwxnzD/linux-3.13.0/net/core/dev.c:1433 dev_disable_lro+0x87/0x90()
[ 3.832453] netdevice: bond0.2004
[ 3.832453] failed to disable LRO!
[ 3.832454] Modules linked in: bridge 8021q garp stp mrp llc kvm_intel kvm serio_raw i2c_piix4 bonding mac_hid qxl psmouse e1000 virtio_scsi ttm drm_kms_helper drm pata_acpi floppy
[ 3.832467] CPU: 0 PID: 1196 Comm: brctl Not tainted 3.13.0-145-generic #194-Ubuntu
[ 3.832468] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1~cloud0 04/01/2014
[ 3.832469] 0000000000000000 ffff88003cad1c70 ffffffff81737607 ffff88003cad1cb8
[ 3.832471] 0000000000000009 ffff88003cad1ca8 ffffffff8106c1cd ffff88003cca1000
[ 3.832473] ffff88003cb80880 ffff88003cca1000 0000000000000000 ffff88003cbb3540
[ 3.832474] Call Trace:
[ 3.832478] [<ffffffff81737607>] dump_stack+0x64/0x80
[ 3.832481] [<ffffffff8106c1cd>] warn_slowpath_common+0x7d/0xa0
[ 3.832483] [<ffffffff8106c23c>] warn_slowpath_fmt+0x4c/0x50
[ 3.832493] [<ffffffff8163bb37>] dev_disable_lro+0x87/0x90
[ 3.832499] [<ffffffffa01f1223>] br_add_if+0x1f3/0x430 [bridge]
[ 3.832502] [<ffffffffa01f1c5d>] add_del_if+0x5d/0x90 [bridge]
[ 3.832506] [<ffffffffa01f254b>] br_dev_ioctl+0x5b/0x90 [bridge]
[ 3.832512] [<ffffffff8164e38e>] dev_ifsioc+0x31e/0x370
[ 3.832518] [<ffffffff81634fd9>] ? dev_get_by_name_rcu+0x69/0x90
[ 3.832521] [<ffffffff8164e4d1>] dev_ioctl+0xf1/0x590
[ 3.832526] [<ffffffff811e0dce>] ? evict+0x11e/0x1b0
[ 3.832529] [<ffffffff8161bfad>] sock_do_ioctl+0x4d/0x60
[ 3.832531] [<ffffffff8161c4e8>] sock_ioctl+0x1f8/0x2d0
[ 3.832534] [<ffffffff817487a7>] ? system_call_after_swapgs+0x141/0x170
[ 3.832536] [<ffffffff811d8bc3>] do_vfs_ioctl+0x2e3/0x4d0
[ 3.832538] [<ffffffff8174877d>] ? system_call_after_swapgs+0x117/0x170
[ 3.832540] [<ffffffff81748776>] ? system_call_after_swapgs+0x110/0x170
[ 3.832542] [<ffffffff8174876f>] ? system_call_after_swapgs+0x109/0x170
[ 3.832544] [<ffffffff81748768>] ? system_call_after_swapgs+0x102/0x170
[ 3.832546] [<ffffffff81748761>] ? system_call_after_swapgs+0xfb/0x170
[ 3.832548] [<ffffffff8174875a>] ? system_call_after_swapgs+0xf4/0x170
[ 3.832550] [<ffffffff81748753>] ? system_call_after_swapgs+0xed/0x170
[ 3.832552] [<ffffffff8174874c>] ? system_call_after_swapgs+0xe6/0x170
[ 3.832554] [<ffffffff811d8e31>] SyS_ioctl+0x81/0xa0
[ 3.832556] [<ffffffff8174871b>] ? system_call_after_swapgs+0xb5/0x170
[ 3.832558] [<ffffffff817487f0>] system_call_fastpath+0x1a/0x1f
[ 3.832559] ---[ end trace 996acb2f420d9ef7 ]---
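Stripped of addresses, the trace reads bottom-up as brctl's ioctl entering the bridge module and calling dev_disable_lro() on bond0.2004: br_dev_ioctl -> add_del_if -> br_add_if -> dev_disable_lro. A small sketch that reduces saved trace lines to just the function names (sample lines copied from above):

```shell
# Reduce kernel trace lines to the bare call chain (sample lines copied from
# the trace above).
trace='[ 3.832493] [<ffffffff8163bb37>] dev_disable_lro+0x87/0x90
[ 3.832499] [<ffffffffa01f1223>] br_add_if+0x1f3/0x430 [bridge]
[ 3.832502] [<ffffffffa01f1c5d>] add_del_if+0x5d/0x90 [bridge]
[ 3.832506] [<ffffffffa01f254b>] br_dev_ioctl+0x5b/0x90 [bridge]'
printf '%s\n' "$trace" | sed -n 's/.*>\] \([A-Za-z_]*\)+.*/\1/p'
```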
Continued in Part 3
Part 3:
Output of ip a:
ubuntu@new-urchin:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:4e:bb:3c brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.65/24 brd 192.168.200.255 scope global ens6
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe4e:bb3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9100 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 52:54:00:09:00:6d brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9100 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 52:54:00:09:00:6d brd ff:ff:ff:ff:ff:ff
5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP group default
    link/ether 52:54:00:09:00:6d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe09:6d/64 scope link
       valid_lft forever preferred_lft forever
6: bond0.2004@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue master br-mesh state UP group default
    link/ether 52:54:00:09:00:6d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe09:6d/64 scope link
       valid_lft forever preferred_lft forever
7: br-mesh: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9100 qdisc noqueue state UP group default
    link/ether 52:54:00:09:00:6d brd ff:ff:ff:ff:ff:ff
    inet 192.168.210.3/24 brd 192.168.210.255 scope global br-mesh
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe09:6d/64 scope link
       valid_lft forever preferred_lft forever
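One detail from the output above: eth1, eth2, bond0, bond0.2004 and br-mesh all share the same MAC (52:54:00:09:00:6d) and therefore the same link-local address. That is normal with bonding: the bond assigns the first slave's MAC to all members, and the bridge inherits it. A sketch of the EUI-64 derivation, assuming plain modified-EUI-64 addressing (the per-group zero stripping here is simplified, but adequate for this address):

```shell
# Derive the EUI-64 link-local address from a MAC: flip the universal/local
# bit of the first octet and insert ff:fe between the two halves.
mac=52:54:00:09:00:6d
IFS=: read -r a b c d e f <<< "$mac"
g1=$(( ((0x$a ^ 2) << 8) | 0x$b ))
g2=$(( (0x$c << 8) | 0xff ))
g3=$(( (0xfe << 8) | 0x$d ))
g4=$(( (0x$e << 8) | 0x$f ))
printf 'fe80::%x:%x:%x:%x\n' "$g1" "$g2" "$g3" "$g4"
```

For 52:54:00:09:00:6d this yields fe80::5054:ff:fe09:6d, matching every member interface above.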