netplan 0.106: lxd container: no dhcp4 on match macaddress

Bug #2022947 reported by Chad Smith
This bug affects 11 people
Affects: Netplan
Status: Won't Fix
Importance: Low
Assigned to: Unassigned

Bug Description

[Affected platform/series]
 netplan.io 0.106 on Lunar and Mantic
 LXD containers and custom network version 2 config matching by macaddress

[Details]
When providing match: [macaddress=<MAC>] in an LXD container with netplan 0.106, no IPv4 address is set up.

This is because netplan.io 0.106 now emits a PermanentMACAddress= clause in the [Match] section of the generated /run/systemd/network files instead of MACAddress=, which means the 'transient' MAC addresses of LXD veth interfaces are not matched/managed.
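
As an illustration (an assumption added for clarity, not output from this report; it assumes ethtool is installed in the container): veth devices expose no permanent hardware address, so a PermanentMACAddress= match can never succeed on them. This can be seen with:

# Query the permanent (hardware) MAC address of the device.
# On a veth interface this typically reports an all-zero address,
# so a PermanentMACAddress=00:16:3e:c8:00:db clause cannot match.
ethtool -P eth0
# Permanent address: 00:00:00:00:00:00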

[Impact]
Network version 2 configs on LXD containers which provide a custom cloud-init.network-config or user.network-config will not bring up IPv4 addresses via DHCP when the network config attempts to match by MAC address. Custom network providers will likely have to match veth devices by some other condition.

This could be a problem if multiple veth interfaces exist, because a `match: name:` condition doesn't seem to support matching a specific veth device by its full name `eth0@if202`; a name-based workaround is sketched below.
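
A minimal sketch of such a name-based match (hedged: it assumes the kernel device name is plain eth0; the @if202 suffix printed by `ip link` denotes the peer ifindex and is not part of the device name):

# Hypothetical workaround config: match the veth device by kernel name
# instead of MAC address, avoiding the PermanentMACAddress= behavior.
cat > network.yaml <<EOF
version: 2
ethernets:
  eth0:
    dhcp4: true
    match:
      name: eth0
EOF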

Note: matching by macaddress does not affect LXD VMs, as they use physical devices.

[Steps to reproduce]
1. Launch a lunar or mantic LXD container with a custom MAC address and attempt to set up networking based on the prescribed MAC. Observe that no IPv4 address is obtained.

2. Revert PermanentMACAddress -> MACAddress in /run/systemd/network/*, run `networkctl reload`, and check that an IPv4 address comes up.

cat > network.yaml <<EOF
version: 2
ethernets:
  eth0:
    dhcp4: true
    match:
      macaddress: 00:16:3e:c8:00:db
EOF
lxc launch ubuntu-daily:lunar -c volatile.eth0.hwaddr=00:16:3e:c8:00:db -c cloud-init.network-config="$(cat network.yaml)" netplan106ByMAC
# Note no ipv4 address
lxc ls netplan106

+-----------------+---------+------+--------------------------------------------+-----------+-----------+
|      NAME       |  STATE  | IPV4 |                    IPV6                    |   TYPE    | SNAPSHOTS |
+-----------------+---------+------+--------------------------------------------+-----------+-----------+
| netplan106ByMAC | RUNNING |      | fd42:e810:4b9b:718:216:3eff:fec8:db (eth0) | CONTAINER | 0         |
+-----------------+---------+------+--------------------------------------------+-----------+-----------+

lxc exec netplan106ByMAC -- cat /run/systemd/network/10-netplan-eth0.network
[Match]
PermanentMACAddress=00:16:3e:c8:00:db

[Network]
DHCP=ipv4
LinkLocalAddressing=ipv6

[DHCP]
RouteMetric=100
UseMTU=true
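
As an optional sanity check (an assumption about systemd-networkd's usual behavior, not output captured for this bug): a link whose [Match] clause fails to apply is left unmanaged, which shows up in networkctl's SETUP column:

# While the PermanentMACAddress= clause fails to match the veth device,
# eth0 should be listed with SETUP "unmanaged" rather than "configured".
lxc exec netplan106ByMAC -- networkctl list eth0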

When changing the match clause in /run/systemd/network from PermanentMACAddress to MACAddress, a DHCPv4 address is properly allocated:

lxc exec netplan106ByMAC -- sed -i s/Permanent// /run/systemd/network/10-netplan-eth0.network
lxc exec netplan106ByMAC -- networkctl reload
lxc ls netplan106
+-----------------+---------+-----------------------+--------------------------------------------+-----------+-----------+
|      NAME       |  STATE  |         IPV4          |                    IPV6                    |   TYPE    | SNAPSHOTS |
+-----------------+---------+-----------------------+--------------------------------------------+-----------+-----------+
| netplan106ByMAC | RUNNING | 10.125.221.233 (eth0) | fd42:e810:4b9b:718:216:3eff:fec8:db (eth0) | CONTAINER | 0         |
+-----------------+---------+-----------------------+--------------------------------------------+-----------+-----------+

Chad Smith (chad.smith)
description: updated
tags: added: foundations-todo
Danilo Egea Gondolfo (danilogondolfo) wrote :

Hi Chad,

We were aware that this change in behavior could cause this kind of problem. However, "match" is supposed to be used with physical devices. Matching by permanent MAC address is also the default behavior in NetworkManager, so Netplan was not being consistent when considering both backends.

In this scenario in particular, as far as I can tell, using "match" is not really necessary.

This is an LXC container with 2 interfaces:

root@up-turkey:~# ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:16:3e:c8:00:db brd ff:ff:ff:ff:ff:ff link-netnsid 0
17: eth1@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:16:3e:7b:6c:b4 brd ff:ff:ff:ff:ff:ff link-netnsid 0

and this is the netplan config inside the container:

network:
  version: 2
  ethernets:
    eth1:
      dhcp4: true
      dhcp6: false
    eth0:
      addresses:
      - "172.16.0.1/24"
      dhcp4: false
      dhcp6: false

both interfaces will be configured accordingly:

+-------------------------+---------+---------------------+-----------------------------------------------+-----------------+-----------+
|          NAME           |  STATE  |        IPV4         |                     IPV6                      |      TYPE       | SNAPSHOTS |
+-------------------------+---------+---------------------+-----------------------------------------------+-----------------+-----------+
| up-turkey               | RUNNING | 172.16.0.1 (eth0)   | fd42:ee65:61d0:abcb:216:3eff:fe7b:6cb4 (eth1) | CONTAINER       | 0         |
|                         |         | 10.33.59.105 (eth1) | fd42:bc43:e20e:8cf7:216:3eff:fec8:db (eth0)   |                 |           |
+-------------------------+---------+---------------------+-----------------------------------------------+-----------------+-----------+

Do you think that matching by MAC address is really necessary inside containers? If there is a use case that can only be satisfied by that, we'll need to introduce a "match.transientmacaddress" property. But as far as I remember, it would only work for networkd, as NetworkManager only matches by permanent MAC address.

Changed in netplan:
status: New → Triaged
Lukas Märdian (slyon)
Changed in netplan:
importance: Undecided → Low
Chad Smith (chad.smith) wrote :

Thank you, Lukas and Danilo. This clarification resolves any concerns cloud-init has about how to approach "complex" networking and veth device matching in containers. I'm glad we can document the expected behavior and the approach needed. I feel this could be closed as Won't Fix if you like, as there is a reasonable workaround for such cases.

Lukas Märdian (slyon)
Changed in netplan:
status: Triaged → Won't Fix