Netplan not creating .network file under certain conditions

Bug #1726478 reported by James Denton
Affects: nplan (Ubuntu)
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

OS: Ubuntu 17.10

Here's the scenario:

network:
  version: 2
  ethernets:
    ens160:
      addresses: [10.50.0.9/24]
      gateway4: 10.50.0.1
      nameservers:
        addresses: [8.8.8.8,8.8.4.4]
  vlans:
    ens160.100:
      id: 100
      link: ens160
    ens160.200:
      id: 200
      link: ens160
    ens160.300:
      id: 300
      link: ens160
  bridges:
    br-mgmt:
      interfaces: [ens160.100]
      addresses: [192.168.100.1/24]
    br-overlay:
      interfaces: [ens160.200]
      addresses: [192.168.200.1/24]
    br-storage:
      interfaces: [ens160.300]

With this configuration, 'netplan apply' results in the following state:

root@ubuntu:~# brctl show
bridge name bridge id STP enabled interfaces
br-mgmt 8000.7282c779e298 no ens160.100
br-overlay 8000.ce8884059e7b no ens160.200
br-storage 8000.b6fc84976f4f no ens160.300

root@ubuntu:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:a8:92:85 brd ff:ff:ff:ff:ff:ff
    inet 10.50.0.9/24 brd 10.50.0.255 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fea8:9285/64 scope link
       valid_lft forever preferred_lft forever
4: br-overlay: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:88:84:05:9e:7b brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.1/24 brd 192.168.200.255 scope global br-overlay
       valid_lft forever preferred_lft forever
    inet6 fe80::cc88:84ff:fe05:9e7b/64 scope link
       valid_lft forever preferred_lft forever
5: br-mgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 72:82:c7:79:e2:98 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.1/24 brd 192.168.100.255 scope global br-mgmt
       valid_lft forever preferred_lft forever
    inet6 fe80::7082:c7ff:fe79:e298/64 scope link
       valid_lft forever preferred_lft forever
6: ens160.200@ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-overlay state UP group default qlen 1000
    link/ether 00:50:56:a8:92:85 brd ff:ff:ff:ff:ff:ff
7: ens160.100@ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-mgmt state UP group default qlen 1000
    link/ether 00:50:56:a8:92:85 brd ff:ff:ff:ff:ff:ff
8: ens160.300@ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-storage state UP group default qlen 1000
    link/ether 00:50:56:a8:92:85 brd ff:ff:ff:ff:ff:ff
9: br-storage: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether b6:fc:84:97:6f:4f brd ff:ff:ff:ff:ff:ff

root@ubuntu:~# ls -l /run/systemd/network
total 48
-rw-r--r-- 1 root root 34 Oct 23 16:09 10-netplan-br-mgmt.netdev
-rw-r--r-- 1 root root 57 Oct 23 16:09 10-netplan-br-mgmt.network
-rw-r--r-- 1 root root 37 Oct 23 16:09 10-netplan-br-overlay.netdev
-rw-r--r-- 1 root root 60 Oct 23 16:09 10-netplan-br-overlay.network
-rw-r--r-- 1 root root 37 Oct 23 16:09 10-netplan-br-storage.netdev
-rw-r--r-- 1 root root 50 Oct 23 16:09 10-netplan-ens160.100.netdev
-rw-r--r-- 1 root root 89 Oct 23 16:09 10-netplan-ens160.100.network
-rw-r--r-- 1 root root 50 Oct 23 16:09 10-netplan-ens160.200.netdev
-rw-r--r-- 1 root root 92 Oct 23 16:09 10-netplan-ens160.200.network
-rw-r--r-- 1 root root 50 Oct 23 16:09 10-netplan-ens160.300.netdev
-rw-r--r-- 1 root root 92 Oct 23 16:09 10-netplan-ens160.300.network
-rw-r--r-- 1 root root 142 Oct 23 16:09 10-netplan-ens160.network

Here's the problem: in our environment there is no need for an IP address on a bridge, and the YAML above reflects this for the br-storage interface. Without an IP address assigned via 'dhcp: true' or 'addresses: [x.x.x.x/x]', the br-storage interface is left in a DOWN state and won't pass traffic. I can set the interface UP with iproute2 (see the example below) and it will then pass traffic as expected.
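
For reference, the manual fix here is a single iproute2 command; the report does not quote the exact invocation, so the standard form is shown as an illustration:

ip link set dev br-storage up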

With ifupdown, we were able to bring up the br-storage interface at boot with the following config:

auto br-storage
iface br-storage inet manual

During troubleshooting, I found that when no address is specified in the YAML, netplan does not create the corresponding .network file for the interface in /run/systemd/network. I created a file named '10-netplan-br-storage.network' with the following contents:

[Match]
Name=br-storage

[Network]

Restarted systemd-networkd with the following:

systemctl restart systemd-networkd

As a result, the br-storage interface came up and bridged traffic as expected:

root@ubuntu:/run/systemd/network# ip link show br-storage
9: br-storage: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b6:fc:84:97:6f:4f brd ff:ff:ff:ff:ff:ff

To resolve this, perhaps .network files could still be created when 'dhcp: false' or 'addresses: []' (or something along those lines) is specified, with the [Network] block left empty as shown above; a rough sketch of what that might look like follows.
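
As a sketch of that proposal (illustrative only, not tested behaviour: dhcp4 and addresses are standard netplan keys, but having them trigger generation of an empty .network file is exactly the change this bug asks for), the br-storage stanza might look like:

  bridges:
    br-storage:
      interfaces: [ens160.300]
      dhcp4: false

with the generated 10-netplan-br-storage.network containing only a [Match] section and an empty [Network] section, as in the hand-written file above.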

Daniel Axtens (daxtens) wrote:

Hi,

(I'm trying to tidy up some old bugs so apologies if this seems a bit out of the blue!)

This is a known issue and I'm pretty sure what you describe is an exact duplicate of LP: #1736975. (perhaps looking at the dates it would be fairer to say that that issue is a duplicate of this, as this was filed first! oh well.)

I'm going to mark it as a duplicate for now but if that's wrong please let me know.

Regards,
Daniel
