cannot add VLANs to a bond network device

Bug #352384 reported by Brazen
Affects: ifupdown (Ubuntu)
Status: Fix Released
Importance: Undecided
Assigned to: Unassigned
Nominated for Jaunty by Brazen

Bug Description

I did this with the Ubuntu Jaunty beta with all the updates as of 2009-03-31.

I have a computer with two NICs and a switch that supports 802.3ad link aggregation (LAG) and VLANs. With Ubuntu on the computer I can use LAG, and I can use VLANs, but I cannot use both at the same time. I know this is possible because it works fine with VMware ESX Server on the same machine, with VLAN trunking on top of 802.3ad-bonded NICs.

Setting up VLAN trunking of VLANs 101 and 102 works with this configuration in /etc/network/interfaces:
---
auto eth0.101
iface eth0.101 inet dhcp

auto eth0.102
iface eth0.102 inet dhcp
---

Removing the VLANs and setting up 802.3ad link aggregation on the switch works with this configuration:
---
auto bond0
iface bond0 inet dhcp
     slaves all
     bond-mode 4
     bond-miimon 100
---

But setting up VLAN trunking on the switch over the LAG device does not work with this configuration:
---
auto bond0.101
iface bond0.101 inet dhcp
     slaves all
     bond-mode 4
     bond-miimon 100
---

When I bring up the device I get this error: "ERROR: trying to add VLAN #101 to IF -:bond0:- error: Operation not supported". Some reading and searching suggests that the error occurs because the VLAN interface is being created before the bond device exists.
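For reference, the required ordering can be reproduced by hand; this is a sketch assuming the bonding and 8021q modules plus the ifenslave and vconfig utilities are available:
---
# Create the bond first (mode 4 = 802.3ad) and enslave the physical NICs.
modprobe bonding mode=4 miimon=100
ip link set bond0 up
ifenslave bond0 eth0 eth1
# Only once bond0 exists can a VLAN sub-interface be stacked on it:
vconfig add bond0 101
ip link set bond0.101 up
dhclient bond0.101
---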

It would make sense to be able to combine the VLAN and bonding configurations and have them work together as expected. This configuration has significant advantages in a high-availability virtualization environment.

Revision history for this message
Soren Hansen (soren) wrote :

I think it's because "slaves all" tries to include the VLAN interface itself. Could you try explicitly listing the devices you want to bond and see if that fixes it? If it does, I have a patch that should fix it.
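That is, something like this (a sketch of the failing stanza, with the slave names taken from your hardware description):
---
auto bond0.101
iface bond0.101 inet dhcp
     slaves eth0 eth1
     bond-mode 4
     bond-miimon 100
---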

Revision history for this message
Márcio Santos (marcio.santos) wrote :

I had a similar problem a while back.

I was trying to bring the VLANs up in a similar way to what you are doing, with the following configuration:

auto vlan50
iface vlan50 inet dhcp
 bond-mode 802.3ad
 bond-miimon 100
 xmit_hash_policy layer2+3
 lacp_rate slow
 slaves all

auto vlan60
iface vlan60 inet dhcp
 bond-mode 802.3ad
 bond-miimon 100
 xmit_hash_policy layer2+3
 lacp_rate slow
 slaves all

That created havoc with my network configuration: when bringing up vlan50 it would try to create a bond out of all interfaces, including the VLANs, and it would then fail when bringing up vlan60 because the slaves were already bonded into vlan50.

It seems the correct way to combine bonding and VLANs is:

# Create the link aggregation without assigning it an address, explicitly listing the interfaces to enslave
auto bond0
iface bond0 inet manual
 bond-mode 802.3ad
 bond-miimon 100
 xmit_hash_policy layer2+3
 lacp_rate slow
 slaves eth0 eth1 eth2 eth3

# Create vlan50
auto vlan50
iface vlan50 inet dhcp
        vlan_raw_device bond0

# Create vlan60
auto vlan60
iface vlan60 inet dhcp
        vlan_raw_device bond0
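
Once it is up, the result can be verified with the bonding and VLAN proc interfaces (assuming the interface names above):

# Aggregator and per-slave LACP state for the bond
cat /proc/net/bonding/bond0
# VLAN interfaces and the raw device they are stacked on
cat /proc/net/vlan/config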

Could you please try the above configuration adapted to your needs and see if it works?

Revision history for this message
Brazen (jdinkel) wrote :

Using my original configuration, it still does not work whether I use "slaves all" or "slaves eth0 eth1".

On a side note, from Márcio's config I added 'xmit_hash_policy layer2+3' and 'lacp_rate slow' to my plain bond0 config, and it solved a speed issue I had with mode 4 bonds. I have not done enough testing to tell whether it was one option or the other that made the difference.
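(For anyone narrowing that down later: the values the driver is actually using can be read back from the bonding module's standard sysfs attributes:)
---
cat /sys/class/net/bond0/bonding/xmit_hash_policy
cat /sys/class/net/bond0/bonding/lacp_rate
---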

Next I will try Márcio's VLAN-on-bonding config...

Revision history for this message
Brazen (jdinkel) wrote :

OK, confirming Márcio's post: this configuration works like a charm:

---
auto bond0
iface bond0 inet manual
     bond-mode 4
     bond-miimon 100
     xmit_hash_policy layer2+3
     lacp_rate slow
     slaves eth0 eth1

auto vlan101
iface vlan101 inet dhcp
        vlan_raw_device bond0

---

Revision history for this message
Márcio Santos (marcio.santos) wrote :

Good to hear it worked for you.

I believe this configuration should be documented in the Ubuntu Server Guide, since it is desirable in most production environments.

I will try to find out how to get this information inserted into the guide.

Kind regards,

Márcio Santos

Revision history for this message
David Tombs (dgtombs) wrote :

Although this was a configuration issue, the reporter posted a message on ubuntu-server regarding it and suggested a change: <https://lists.ubuntu.com/archives/ubuntu-server/2009-April/002792.html>.

I am therefore assigning this to ifupdown so the maintainers can figure out how to act on it.

affects: ubuntu → ifupdown (Ubuntu)
Revision history for this message
Stéphane Graber (stgraber) wrote :

This has been fixed in Precise.

Changed in ifupdown (Ubuntu):
status: New → Fix Released