netplan can't set broadcast address

Bug #1780305 reported by Matt Stancliff
This bug affects 3 people
Affects: Netplan | Status: New | Importance: Undecided | Assigned to: Unassigned

Bug Description

Spent a couple of hours trying to use netplan recently, and it turns out it can't actually configure my interfaces. Oops.

OVH uses weird SDN floating IPs where each IP you get is a self-contained /32 (allows them to hand out full subnets without wasting IPs on router and broadcast addresses). For their setup, the broadcast address *must* be the public IP address itself (and the gateway is defined as a directly attached or on-link route).

Right now with a /32 IP, netplan defaults the broadcast address to 0.0.0.0.

As for fixing this on the netplan side: netplan IP addresses are just a YAML list and not a YAML dict, so we can't easily attach a "broadcast" field to the existing IP list without also turning each IP entry into a mapping, which would break existing schemas.

Potential solutions:
If you see a /32, automatically set the broadcast address to the IP itself (would this break other assumptions and/or configs? unsure).
Or, we'd need a new dict to bind explicit broadcast IP details to existing IP definitions like:
broadcasts:
  - ip: 1.2.3.4
    broadcast: 1.2.3.4

For now, I've skipped netplan and basically created my own mini-netplan in ansible to template the systemd network config with:
(using 2.2.2.2/32 as the host IP/network and 4.4.4.4 as the gateway IP)
  ethernets:
      interface: ens4
      addresses:
        - ip: 2.2.2.2/32
          broadcast: 2.2.2.2
      routes:
        - to: 4.4.4.4/32
          via: 4.4.4.4
          on-link: true
        - to: 0.0.0.0/0
          via: 4.4.4.4
          on-link: true

and here's the resulting systemd network config that must exist for the OVH networking to function:
[Match]
Name=ens4

[Network]
Address=2.2.2.2/32

[Address]
Address=2.2.2.2/32
Broadcast=2.2.2.2

[Route]
Destination=4.4.4.4/32
Gateway=4.4.4.4
GatewayOnlink=true

[Route]
Destination=0.0.0.0/0
Gateway=4.4.4.4
GatewayOnlink=true
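
For reference, the ansible side of this is just a Jinja2 template along these lines (the variable names here are illustrative, not the exact ones from my playbook):

{# Illustrative template rendering a systemd .network file from the
   mini-netplan structure above; `eth` is an assumed variable name. #}
[Match]
Name={{ eth.interface }}

[Network]
{% for addr in eth.addresses %}
Address={{ addr.ip }}
{% endfor %}

{% for addr in eth.addresses %}
[Address]
Address={{ addr.ip }}
{% if addr.broadcast is defined %}
Broadcast={{ addr.broadcast }}
{% endif %}
{% endfor %}

{% for route in eth.routes %}
[Route]
Destination={{ route.to }}
Gateway={{ route.via }}
{% if route['on-link'] | default(false) %}
GatewayOnlink=true
{% endif %}
{% endfor %}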

Revision history for this message
adamretter (adam-retter) wrote :

I am also trying to use netplan on OVH. However, I have found that the broadcast address being set to 0.0.0.0 doesn't seem to be a problem.

Instead, I think the problem that I (and possibly you) have is getting the routes set correctly.

I started with something like this:

network:
    version: 2
    ethernets:
        ens3:
            addresses:
            - 54.36.67.139/32
            gateway4: 91.121.89.254
            match:
                macaddress: 06:00:00:42:9b:44
            nameservers:
                addresses:
                - 213.186.33.99
                search:
                - mydomain.com
            set-name: ens3

However when the interface comes up, if I run `route -n` then I see no routing entries:

Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface

Before Netplan, when using /etc/network/interfaces, I had to do some tricks to configure the routing, as specifying `gateway` did not work:

iface ens3 inet static
        address 54.36.67.139
        netmask 255.255.255.255
        network 54.36.67.139
        broadcast 5.196.205.132
        post-up /sbin/route add 91.121.89.254 dev ens3
        post-up /sbin/route add default gw 91.121.89.254
        pre-down /sbin/route del 91.121.89.254 dev ens3
        pre-down /sbin/route del default gw 91.121.89.254
        dns-nameservers 213.186.33.99
        dns-search mydomain.com

Although in newer versions of Ubuntu that seems to have been fixed, and I can now just use `gateway 91.121.89.254` instead of the post-up and pre-down hooks.

After Netplan starts up with the config I provided above, if I manually run:

sudo route add 91.121.89.254 dev ens3
sudo route add default gw 91.121.89.254

then everything is working and I have network and Internet access. At that point `route -n` reports:

Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 91.121.89.254 0.0.0.0 UG 0 0 0 ens3
91.121.89.254 0.0.0.0 255.255.255.255 UH 0 0 0 ens3
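
For reference, the iproute2 equivalents of those two commands would be something like:

ip route add 91.121.89.254/32 dev ens3
ip route add default via 91.121.89.254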

Based on that information, I also tried the following Netplan config:

network:
    version: 2
    ethernets:
        ens3:
            addresses:
            - 54.36.67.139/32
            match:
                macaddress: 06:00:00:42:9b:44
            nameservers:
                addresses:
                - 213.186.33.99
                search:
                - mydomain.com
            set-name: ens3
            routes:
              - to: 91.121.89.254/32
                via: 0.0.0.0
              - to: 0.0.0.0/0
                via: 91.121.89.254

Unfortunately this only seems to register one of the two routes (presumably because networkd tries to install the default route while its gateway is not yet reachable, and the route is not marked on-link). So there is no network or Internet access with that second Netplan config, and now `route -n` reports:

Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
91.121.89.254 0.0.0.0 255.255.255.255 UH 0 0 0 ens3
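
Perhaps the default route just needs to be explicitly marked on-link, the way the workaround in the original bug description does, along these lines (untested on my side):

            routes:
              - to: 91.121.89.254/32
                via: 0.0.0.0
              - to: 0.0.0.0/0
                via: 91.121.89.254
                on-link: true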

tags: added: routes
Revision history for this message
adamretter (adam-retter) wrote :

I was able to temporarily work around the issue of setting my routes at startup by using networkd-dispatcher and creating the following files as the root user (chmod u+x):

/usr/lib/networkd-dispatcher/routable.d/40-fix-routing-on
 #!/bin/sh
 /sbin/route add 91.121.89.254 dev ens3
 /sbin/route add default gw 91.121.89.254

/usr/lib/networkd-dispatcher/off.d/40-fix-routing-off
 #!/bin/sh
 /sbin/route del 91.121.89.254 dev ens3
 /sbin/route del default gw 91.121.89.254

Unfortunately this is not sustainable, as I am using `uvt-kvm` to build out my VM guests with Netplan, and there is no way for me to inject those files into the VM build.

I think this routing issue is something that needs to be fixed in Netplan.

Revision history for this message
adamretter (adam-retter) wrote :

I finally managed to get a working Netplan config for OVH where the routing works, and where I don't need the networkd-dispatcher workarounds:

network:
    version: 2
    ethernets:
        ens3:
            addresses:
            - 54.36.67.139/32
            gateway4: 91.121.89.254
            match:
                macaddress: 06:00:00:42:9b:44
            nameservers:
                addresses:
                - 213.186.33.99
                search:
                - mydomain.com
            set-name: ens3
            routes:
            - to: 91.121.89.254/32
              via: 0.0.0.0
              scope: link

@mattsta perhaps this also solves your issue?

Revision history for this message
Matt Stancliff (mattsta) wrote :

Interesting fixes! Thanks for the shorter gateway-vs-double-route tip. It's working for me here too.

Today I ran into a new problem that my previous fix (which only covered a single IP per host) didn't handle, now that I'm converting my dozen-IP box from old RHEL to bionic, but I finally found how to repair it with some poorly named systemd ini stanzas.

Turns out the systemd section that looks like:

[Network]
Address=blah

should really have Address= named "Subnet=", because that's what it wants. The official systemd docs don't clearly distinguish between an "address" and a "subnet" when describing which one each setting expects (and the docs seem to imply your [Address] Address= should match your [Network] Address= entry exactly, but that's wrong).

So, even with multiple subnets on one interface, we can now just configure full-subnet [Network] Address= entries, and then the remaining [Address] Address= sections don't need broadcast addresses (which systemd wasn't picking up with multiple IPs per interface anyway).

Remaining issue: can netplan directly configure multiple [Network] sections with full subnet ranges while specifying exact [Address] sections of /32 each?

My working config now looks something like:
=================================
[Network]
Address=4.4.4.1/29
Gateway=7.7.7.7

[Network]
Address=5.5.5.1/30
Gateway=7.7.7.7

[Address]
Address=4.4.4.1/32

[Address]
Address=4.4.4.2/32

[Address]
Address=5.5.5.1/32

[Address]
Address=5.5.5.2/32

[Route]
Destination=0.0.0.0/0
Gateway=7.7.7.7
GatewayOnlink=true
=================================

You can tell if it's working because "ip route show table local dev ens4" (or whatever interface you're using) will show two broadcast entries per defined subnet.

My confusion came from previously configuring these aliases on CentOS, where they still use interface aliases (eth0:0) and each alias is basically configured as an independent device, so each device has its own IP+broadcast settings. But systemd uses more modern address lists instead, where everything just piles onto the interface itself, so each IP obviously can't have its own broadcast address (since that's a property of a link+subnet, not of an IP).

Even the official OVH instructions at https://docs.ovh.com/gb/en/dedicated/network-ipaliasing/ don't quite tell us the whole story about hosting multiple failover subnets on one systemd host. Plus, their RedHat configs don't translate to systemd at all, since the two config systems end up driving entirely different OS tooling.
