bonding does not work
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
systemd (Ubuntu) | Expired | Undecided | Unassigned |
Bug Description
I'm working on a libvirt/KVM hypervisor.
I use 2 bonded interfaces:
eno1 + eno2 => bond0 via "netplan" ("service" interface for the hypervisor)
eno3 + eno4 => bond1 via "ifupdown" ("virtualization" interface for libvirt)
Since the last upgrade (today), I get "Warning: No 802.3ad response from the link partner ..." in dmesg, and bonding no longer works with either "netplan" or "ifupdown".
A kernel problem? An ifenslave problem?
Laurent Spagnol (laurent-spagnol) wrote : | #1 |
Justin Coffman (jcoffman) wrote : | #2 |
There are a number of problems with your configuration in both cases.
For netplan, you shouldn't specify any network-level configuration for an interface that is later expected to participate in a bond. Anything later in the file will clobber anything specified previously, leading to unexpected behavior (or just meaningless configuration entries). Also note that systemd-networkd does not rely on ifenslave for bonding.
For /e/n/i, you're trying to enslave an interface to a bond in two separate places: first by specifying bond-master for an interface, then by specifying a slave under the bond. Remove the physical interfaces from /e/n/i; the bond-slaves parameter on the bonded interface will handle configuring the physical interfaces. Also, if this is a direct paste of your configuration file, bond-miimon is misspelled. I suspect that might also cause issues.
You're also specifying physical interface configuration in two places: eno3/4 are both being manipulated by both netplan/networkd and /e/n/i.
Lastly, note that you can use a netplan/networkd bond for libvirt/KVM. Personally, I'd migrate your configuration to netplan. That's what I do for my KVM hosts.
Laurent Spagnol (laurent-spagnol) wrote : | #3 |
OK.
But this config was working until the update of nplan!
Finally, the main issue is the "man" pages: I was not able to find good documentation about "netplan" ...
Netplan is installed by default instead of "ifupdown", but migration between them is painful and netplan is very poorly documented.
Do you have good links to help me?
Thanks in advance.
Laurent Spagnol (laurent-spagnol) wrote : | #4 |
Here are some attempts:
cat /etc/netplan/
network:
  version: 2
  renderer: networkd
  bonds:
    bond0:
      interfaces:
        - eno1
        - eno2
      parameters:
        mode: 802.3ad
        lacp-rate: fast
      addresses: [10.5.1.174/24]
      gateway4: 10.5.1.254
      nameservers:
        addresses: [193.50.208.4]
        search: [univ-reims.fr]
# netplan generate
Error in network definition //etc/netplan/
# netplan apply
Error in network definition //etc/netplan/
So I added something for eno1 and eno2 (the MAC addresses of eno1 and eno2):
cat /home/kvm-
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      match:
        macaddress: d0:94:66:0e:7e:07
    eno2:
      match:
        macaddress: d0:94:66:0e:7e:08
  bonds:
    bond0:
      interfaces:
        - eno1
        - eno2
      parameters:
        mode: 802.3ad
        lacp-rate: fast
      addresses: [10.5.1.174/24]
      gateway4: 10.5.1.254
      nameservers:
        addresses: [193.50.208.4]
        search: [univ-reims.fr]
# netplan generate
# netplan apply
The result is:
Mar 12 10:51:47 kvm-rs-02 kernel: [584368.010517] bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
# cat /proc/net/
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: d0:94:66:0e:7e:08
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 1
Actor Key: 0
Partner Key: 1
Partner Mac Address: 00:00:00:00:00:00
Slave Interface: eno2
MII Status: down
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: d0:94:66:0e:7e:08
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: d0:94:66:0e:7e:08
port key: 0
port priority: 255
port number: 1
port state: 79
details partner lacp pdu:
system priority: 65535
system mac address: 00:00:00:00:00:00
oper key: 1
port priority: 255
port number: 1
port state: 1
Slave Interface: eno1
MII Status: down
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: d0:94:66:0e:7e:07
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: d0:94:66:0e:7e:08
port key: 0
port priority: 255
port number: 2
port state: 71
details partner lacp pdu:
system priority: 65535
system mac address: 00:00:00:00:00:00
oper key: 1
port priority: 255
port number: 1
port state: ...
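For anyone decoding the "port state" values in the dump above (79, 71, 1): they are bitmasks defined by IEEE 802.1AX (and mirrored in the kernel's drivers/net/bonding/bond_3ad.h). A small illustrative decoder; the flag names follow the kernel source but the function itself is mine:

```python
# Decode the 802.3ad "port state" byte from /proc/net/bonding/* output.
# Bit layout follows IEEE 802.1AX / the kernel's bond_3ad.h.
FLAGS = [
    (0x01, "lacp_activity"),    # active LACP: this end sends LACPDUs
    (0x02, "lacp_timeout"),     # fast (short) timeout requested
    (0x04, "aggregation"),      # link is considered aggregatable
    (0x08, "synchronization"),  # in sync with the chosen aggregator
    (0x10, "collecting"),       # accepting incoming frames
    (0x20, "distributing"),     # transmitting frames
    (0x40, "defaulted"),        # using default partner info: NO LACPDU received
    (0x80, "expired"),          # partner info has expired
]

def decode_port_state(state: int) -> list[str]:
    """Return the names of the LACP state flags set in `state`."""
    return [name for bit, name in FLAGS if state & bit]

if __name__ == "__main__":
    for s in (79, 71, 1):
        print(s, decode_port_state(s))
```

Decoding 79 and 71 shows the "defaulted" bit (0x40) set on both slaves, i.e. the bond never received an LACPDU from the switch, which is exactly what the "No 802.3ad response from the link partner" warning says.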
Laurent Spagnol (laurent-spagnol) wrote : | #5 |
Okay.
I have checked my conf files; I had used some examples found on the Ubuntu website.
For ifupdown:
https:/
For nplan:
https:/
https:/
I saw that I MUST define something at the NIC level:
Error in network definition //etc/netplan/
I did some other tests.
This config does not work:
auto eno3
iface eno3 inet manual
bond-master bond1
auto eno4
iface eno4 inet manual
bond-master bond1
auto bond1
iface bond1 inet manual
bond-slaves eno3 eno4
bond-mode 802.3ad
bond-miion 100
bond-lacp-rate fast
bond-xmit_
Error =>
"bondX: Warning: No 802.3ad response from the link partner for any adapters in the bond"
"Slave Interface: enoxx
MII Status: down"
I made some changes: removed the bond parameters from /e/n/i, unloaded the bonding module, and put the module parameters in "modprobe".
/e/n/i (note that you MUST configure the NICs)
auto eno3
iface eno3 inet manual
bond-master bond1
auto eno4
iface eno4 inet manual
bond-master bond1
auto bond1
iface bond1 inet manual
bond-slaves eno3 eno4
/etc/modprobe.
alias bond0 bonding
alias bond1 bonding
options bonding mode=4 lacp_rate=1 xmit_hash_policy=2 miimon=100 downdelay=200 updelay=200
It WORKS !!
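For reference, the numeric values in that options line map to the named settings used elsewhere in this thread. A small sketch of the mapping, based on the bonding driver's documented option values (the `describe` helper is mine, for illustration):

```python
# Name tables for the kernel bonding module's numeric option values,
# as documented in Documentation/networking/bonding.rst.
MODE = {0: "balance-rr", 1: "active-backup", 2: "balance-xor", 3: "broadcast",
        4: "802.3ad", 5: "balance-tlb", 6: "balance-alb"}
LACP_RATE = {0: "slow", 1: "fast"}
XMIT_HASH = {0: "layer2", 1: "layer3+4", 2: "layer2+3"}

def describe(opts: str) -> dict:
    """Translate a modprobe-style options string into human-readable values."""
    out = {}
    for kv in opts.split():
        key, _, val = kv.partition("=")
        n = int(val)
        if key == "mode":
            out[key] = MODE[n]
        elif key == "lacp_rate":
            out[key] = LACP_RATE[n]
        elif key == "xmit_hash_policy":
            out[key] = XMIT_HASH[n]
        else:
            out[key] = n  # miimon/updelay/downdelay are in milliseconds
    return out

print(describe("mode=4 lacp_rate=1 xmit_hash_policy=2 miimon=100 downdelay=200 updelay=200"))
```

So the modprobe line above requests the same 802.3ad / fast-LACP / layer2+3 configuration that the /e/n/i and netplan stanzas tried (and failed) to apply.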
Laurent Spagnol (laurent-spagnol) wrote : | #6 |
So I have done the same test with nplan:
/etc/netplan/
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
    eno2:
      dhcp4: false
      dhcp6: false
  bonds:
    bond0:
      interfaces:
        - eno1
        - eno2
      addresses: [10.5.1.174/24]
      gateway4: 10.5.1.254
      nameservers:
        addresses: [193.50.208.4]
        search: [univ-reims.fr]
Now, IT WORKS !!!
Conclusion: I don't know "why", but there is a workaround after the last update: bonding with 802.3ad (LACP) DOES NOT WORK correctly with either ifupdown or netplan if I put the bonding parameters in /e/n/i or 01-netcfg.yaml!!!
Solution: remove ALL parameters from /e/n/i and put them in /etc/modprobe.
Laurent Spagnol (laurent-spagnol) wrote : | #7 |
Summary of WORKING CONFIGURATION:
/etc/modprobe.
alias bond0 bonding
alias bond1 bonding
options bonding mode=4 lacp_rate=1 xmit_hash_policy=2 miimon=100 downdelay=200 updelay=200
auto eno3
iface eno3 inet manual
bond-master bond1
auto eno4
iface eno4 inet manual
bond-master bond1
auto bond1
iface bond1 inet manual
bond-slaves eno3 eno4
/etc/netplan/
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
    eno2:
      dhcp4: false
      dhcp6: false
  bonds:
    bond0:
      interfaces:
        - eno1
        - eno2
      addresses: [10.5.1.174/24]
      gateway4: 10.5.1.254
      nameservers:
        addresses: [193.50.208.4]
        search: [univ-reims.fr]
Laurent Spagnol (laurent-spagnol) wrote : | #8 |
I missed some important information ...
I'm running Ubuntu 18.04.
My original configuration was working fine before the updates (5/3/2018):
nplan 0.32 => 0.33
There were no updates for "ifupdown", but netplan AND ifupdown were both broken AFTER the update. So the problem is probably not nplan or ifupdown.
List of updates:
bsdutils:amd64 1:2.30.2-0.1ubuntu2 1:2.31.1-0.4ubuntu2
locales:all 2.26-0ubuntu2.1 2.27-0ubuntu2
libc6:amd64 2.26-0ubuntu2.1 2.27-0ubuntu2
libc-bin:amd64 2.26-0ubuntu2.1 2.27-0ubuntu2
debconf-i18n:all 1.5.65 1.5.66
python3-debconf:all 1.5.65 1.5.66
debconf:all 1.5.65 1.5.66
libquadmath0:amd64 8-20180208-0ubuntu1 8-20180218-1ubuntu1
gcc-8-base:amd64 8-20180208-0ubuntu1 8-20180218-1ubuntu1
libgcc1:amd64 1:8-20180208-
libgomp1:amd64 8-20180208-0ubuntu1 8-20180218-1ubuntu1
libstdc++6:amd64 8-20180208-0ubuntu1 8-20180218-1ubuntu1
e2fsprogs-l10n:all 1.43.9-1ubuntu1 1.43.9-2
libext2fs2:amd64 1.43.9-1ubuntu1 1.43.9-2
e2fsprogs:amd64 1.43.9-1ubuntu1 1.43.9-2
libuuid1:amd64 2.30.2-0.1ubuntu2 2.31.1-0.4ubuntu2
libblkid1:amd64 2.30.2-0.1ubuntu2 2.31.1-0.4ubuntu2
libfdisk1:amd64 2.30.2-0.1ubuntu2 2.31.1-0.4ubuntu2
libselinux1:amd64 2.7-2build1 2.7-2build2
libmount1:amd64 2.30.2-0.1ubuntu2 2.31.1-0.4ubuntu2
libsmartcols1:amd64 2.30.2-0.1ubuntu2 2.31.1-0.4ubuntu2
fdisk:amd64 2.30.2-0.1ubuntu2 2.31.1-0.4ubuntu2
util-linux:amd64 2.30.2-0.1ubuntu2 2.31.1-0.4ubuntu2
mount:amd64 2.30.2-0.1ubuntu2 2.31.1-0.4ubuntu2
libvirt-
udev:amd64 235-3ubuntu3 237-3ubuntu3
libudev1:amd64 235-3ubuntu3 237-3ubuntu3
libvirt-
libvirt-
libvirt-
libvirt0:amd64 4.0.0-1ubuntu3 4.0.0-1ubuntu4
systemd-sysv:amd64 235-3ubuntu3 237-3ubuntu3
libseccomp2:amd64 2.3.1-2.1ubuntu3 2.3.1-2.1ubuntu4
man-db:amd64 2.8.1-1 2.8.2-1
popularity-
uuid-runtime:amd64 2.30.2-0.1ubuntu2 2.31.1-0.4ubuntu2
perl-modules-
libglib2.0-0:amd64 2.54.1-1ubuntu1 2.55.2-2ubuntu1
qemu-utils:amd64 1:2.11+
qemu-system-
qemu-block-
libcom-err2:amd64 1.43.9-1ubuntu1 1.43.9-2
libgcrypt20:amd64 1.8.1-4 1.8.1-4ubuntu1
libsemanage-
libsemanage1:amd64 2.7-2build1 2.7-2build2
libss2:amd64 1.43.9-1ubuntu1 1.43.9-2
gpgv:amd64 2.1.15-1ubuntu8 2.2.4-1ubuntu1
multiarch-
ubuntu-keyring:all 2018.02.06 2018.02.28
file:amd64 1:5.32-1 1:5.32-2
libmagic1:amd64 1:5.32-1 1:5.32-2
libmagic-mgc:amd64 1:5.32-1 1:5.32-2
isc-dhcp-
isc-dhcp-
libglib2.0-data:all 2.54.1-1ubuntu1 2.55.2-2ubuntu1
libslang2:amd64 2.3.1a-1ubuntu1 2.3.1a-3ubuntu1
libssl1.1:amd64 1.1.0g-2ubuntu1 1.1.0g-2ubuntu2
openssl:amd64 1.1.0g-2ubuntu1 1.1.0g-2ubuntu2
ucf:all 3.0037 3.0038
vim:amd64 2:8.0.1401-1ubuntu2 2:8.0.1401-1ubuntu3
vim-tiny:amd64 2:8.0.1401-1ubuntu2 2:8.0....
Mathieu Trudel-Lapierre (cyphermox) wrote : | #9 |
Your original config in comment #1 is correct: you must specify the underlying devices, because those names are matched later for "interfaces:" in the bond config.
The issue with 802.3ad is likely a driver issue or a bug in systemd; the right mode needs to be set by networkd (which may require rebooting rather than just running 'netplan apply').
Reassigning to systemd for investigation. We do have unit / autopkgtest tests running for netplan in which 802.3ad appears to be correctly set, but they do not interact with other network devices (there are no other network devices in the test environment).
Furthermore, is anything beyond the physical network adapters configured to use 802.3ad? This is important for the bonding to work.
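One way to check whether networkd actually applied the requested mode (rather than trusting the YAML) is to read it back from sysfs after 'netplan apply' or a reboot. A minimal sketch; `read_bond_mode` assumes a host where bond0 exists, and the parser is exercised offline against a sample sysfs string:

```python
from pathlib import Path

def bond_mode(raw: str) -> tuple[str, int]:
    """Parse the contents of /sys/class/net/<bond>/bonding/mode,
    which looks like "802.3ad 4" (mode name, then numeric mode)."""
    name, num = raw.split()
    return name, int(num)

def read_bond_mode(bond: str = "bond0") -> tuple[str, int]:
    # Only works on a host where the bond actually exists.
    return bond_mode(Path(f"/sys/class/net/{bond}/bonding/mode").read_text())

if __name__ == "__main__":
    # Offline check against a sample sysfs string:
    print(bond_mode("802.3ad 4"))
```

If this reports anything other than ("802.3ad", 4) after applying the netplan config, the mode was never handed to the driver, pointing at networkd rather than the kernel.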
affects: | nplan (Ubuntu) → systemd (Ubuntu) |
Changed in systemd (Ubuntu): | |
status: | New → Incomplete |
Laurent Spagnol (laurent-spagnol) wrote : | #10 |
The physical interfaces are only used by the bonding interfaces:
- bond0 has an IP address
- bond1 is used by bridges and VLANs
I've solved my problem:
- by adding LACP parameters at module loading:
/etc/modprobe.
alias bond0 bonding
alias bond1 bonding
options bonding mode=4 lacp_rate=1 xmit_hash_policy=2 miimon=100 downdelay=200 updelay=200
- and nothing about the bonding mode in ENI or netcfg.yaml!
My servers are "KVM/Libvirt" hosts.
* Bond0 is my "service" interface to the host
/etc/netplan/
# Example network interface configuration for the hypervisor
# => '/etc/netplan/
#
# => 2 ports / LACP
# => The virtualization interfaces MUST be configured
# via '/etc/network/
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
    eno2:
      dhcp4: false
      dhcp6: false
  bonds:
    bond0:
      interfaces:
        - eno1
        - eno2
      addresses: [10.5.1.174/24]
      gateway4: 10.5.1.254
      nameservers:
        addresses: [193.50.208.4]
        search: [univ-reims.fr]
* Bond1 is used by VMs.
Note that virt-manager NEEDS access to /etc/network/
I put the "bond1" configuration in an included file (/etc/network/
/etc/network/
# Example network interface configuration for Libvirt
# => '/etc/network/
# => Libvirt can access these interfaces
# => Definition of the VLANs used by Libvirt
source /etc/network/
auto vlan51
iface vlan51 inet manual
bridge_ports bond1.51
bridge_stp off
bridge_fd 0
auto vlan95
iface vlan95 inet manual
bridge_ports bond1.95
bridge_stp off
bridge_fd 0
auto vlan30
iface vlan30 inet manual
bridge_ports bond1.30
bridge_stp off
bridge_fd 0
/etc/network/
# Example network interface configuration for Libvirt
# => '/etc/network/
# => Libvirt cannot access these interfaces
# => 2 physical interfaces / LACP
auto eno3
iface eno3 inet manual
bond-master bond1
auto eno4
iface eno4 inet manual
bond-master bond1
auto bond1
iface bond1 inet manual
bond-slaves eno3 eno4
Launchpad Janitor (janitor) wrote : | #11 |
[Expired for systemd (Ubuntu) because there has been no activity for 60 days.]
Changed in systemd (Ubuntu): | |
status: | Incomplete → Expired |
Unishop (unishop) wrote : | #12 |
A year later, still the same problem.
This bug affects "netplan" but also "ifupdown". I guess it's an issue with ifenslave?
My configs:
/etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
    eno2:
      dhcp4: false
      dhcp6: false
    eno3:
      dhcp4: false
      dhcp6: false
    eno4:
      dhcp4: false
      dhcp6: false
  bonds:
    bond0:
      interfaces:
        - eno1
        - eno2
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        transmit-hash-policy: layer2+3
      addresses: [10.5.1.172/24]
      gateway4: 10.5.1.254
      nameservers:
        addresses: [193.50.208.4]
        search: [univ-reims.fr]
I have tried to replace "lacp-rate: fast" with "lacp-rate: 1" without success.
/etc/network/interfaces.d/bond1
auto eno3
iface eno3 inet manual
bond-master bond1
auto eno4
iface eno4 inet manual
bond-master bond1
#auto bond1
iface bond1 inet manual
bond-slaves eno3 eno4
bond-mode 802.3ad
bond-miion 100
bond-lacp-rate 1
bond-xmit_hash_policy layer2+3