MAAS creates extraneous fabrics when using multiple interfaces and VLANs
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
MAAS | Invalid | Undecided | Unassigned |
1.9 | Won't Fix | High | Unassigned |
2.0 | Invalid | Undecided | Unassigned |
Bug Description
When MAAS first creates the cluster controller, it auto-detects the existing network configuration and creates the appropriate subnets, VLANs, and fabrics. This seems to work fine with a single physical interface with VLANs, but when the controller has two interfaces with multiple VLANs, MAAS creates a fabric for each VLAN on the secondary interface.
For example, my controller has the following configuration; the issue can be reproduced by setting up something similar and then installing MAAS normally:
- 4 NICs, bonded in pairs to create bond0 and bond1
- bond0 is set with a static IP in the untagged VLAN for PXE boot, and then has 4 additional VLAN interfaces. Call them bond0.100, bond0.101, bond0.102, and bond0.103
- bond1 has no IP, but has 4 VLANs. Call them bond1.200, bond1.201, bond1.202, and bond1.203
After installing MAAS 1.9, the cluster controller auto-detects everything correctly and creates fabric0 containing all VLANs attached to bond0, as well as the untagged VLAN. But it then creates a separate fabric for each of the VLANs on bond1, each containing an untagged VLAN plus the VLAN it pulled from the interface. The end result is 5 fabrics:
fabric0: untagged, VLAN100, VLAN101, VLAN102, VLAN103
fabric1: untagged, VLAN200
...
fabric4: untagged, VLAN203
I was able to fix this using the MAAS CLI, but it seems to me that MAAS should create a single fabric for each physical port (or bonded port) it finds. I should have seen something like this after installing MAAS:
fabric0: untagged, VLAN100, VLAN101, VLAN102, VLAN103
fabric1: untagged, VLAN200, VLAN201, VLAN202, VLAN203
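The grouping the reporter suggests can be sketched in a few lines: group VLAN sub-interfaces by their parent interface and create one fabric per parent, each with an implicit untagged VLAN. This is only an illustration of the proposed behavior, not MAAS's actual detection code; the function name and interface-name parsing are assumptions for the sketch.

```python
from collections import defaultdict

def group_into_fabrics(interfaces):
    """Illustrative sketch: one fabric per parent interface.

    Interface names like 'bond0.100' are treated as VLAN
    sub-interfaces of 'bond0'; plain names are untagged parents.
    """
    fabrics = defaultdict(lambda: ["untagged"])
    for iface in interfaces:
        if "." in iface:
            parent, vid = iface.split(".", 1)
            fabrics[parent].append("VLAN" + vid)
        else:
            # Parent with no tagged sub-interfaces seen yet: just the
            # untagged VLAN for now.
            fabrics.setdefault(iface, ["untagged"])
    return dict(fabrics)

ifaces = ["bond0", "bond0.100", "bond0.101", "bond0.102", "bond0.103",
          "bond1.200", "bond1.201", "bond1.202", "bond1.203"]
for fabric, vlans in group_into_fabrics(ifaces).items():
    print(fabric, vlans)
```

With the configuration from this report, this yields exactly the two-fabric layout shown above: bond0 with VLANs 100-103 and bond1 with VLANs 200-203, each alongside an untagged VLAN.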
Related branches
- MAAS Maintainers: Pending requested
  Diff: 89 lines (+66/-0), 1 file modified: src/maasserver/tests/test_forms_nodegroup.py (+66/-0)
- MAAS Maintainers: Pending requested
  Diff: 126 lines (+83/-2) (has conflicts), 2 files modified: required-packages/base (+17/-0), src/maasserver/tests/test_forms_nodegroup.py (+66/-2)
no longer affects: maas/1.10
Changed in maas:
milestone: next → none
Hi Mike,
What you are seeing is actually intended. Each cluster controller creates one fabric per connected interface, because MAAS has no way of knowing whether the interfaces are on the same fabric or on different ones.
Initially, MAAS did what you suggest (put everything in the same fabric), but we received feedback asking for the opposite, which is the current behavior.
While this is a great suggestion, we won't be addressing it. However, we'd like to keep the bug open to document user scenarios, so we can revisit later whether fabric creation should change.