bionic: LXD containers don't get fan network IP addresses

Bug #1756040 reported by James Page
This bug affects 3 people
Affects              Status        Importance  Assigned to  Milestone
Canonical Juju       Fix Released  High        Unassigned
juju 2.3             Won't Fix     High        Unassigned
cloud-init (Ubuntu)  Invalid       Undecided   Unassigned
lxd (Ubuntu)         Invalid       Undecided   Unassigned

Bug Description

Juju Version: 2.3.4
Substrate: OpenStack

I'm deploying a bundle which makes use of LXD containers, leveraging fan networking for communication between containers and hosts; containers start and have the following config snippet:

devices:
  eth0:
    hwaddr: 00:16:3e:01:7e:0f
    mtu: "8908"
    name: eth0
    nictype: bridged
    parent: fan-252
    type: nic

The container has a NIC:

root@juju-4e2443-0-lxd-0:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8908 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:01:7e:0f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::216:3eff:fe01:7e0f/64 scope link
       valid_lft forever preferred_lft forever

which is marked UP. The host devices:

# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8958 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:c6:8a:fa brd ff:ff:ff:ff:ff:ff
    inet 10.5.0.4/16 brd 10.5.255.255 scope global dynamic ens3
       valid_lft 85454sec preferred_lft 85454sec
    inet6 fe80::f816:3eff:fec6:8afa/64 scope link
       valid_lft forever preferred_lft forever
3: fan-252: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8908 qdisc noqueue state UP group default qlen 1000
    link/ether e6:86:08:94:73:dc brd ff:ff:ff:ff:ff:ff
    inet 252.0.4.1/8 scope global fan-252
       valid_lft forever preferred_lft forever
    inet6 fe80::e486:8ff:fe94:73dc/64 scope link
       valid_lft forever preferred_lft forever
4: ftun0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8908 qdisc noqueue master fan-252 state UNKNOWN group default qlen 1000
    link/ether e6:86:08:94:73:dc brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e486:8ff:fe94:73dc/64 scope link
       valid_lft forever preferred_lft forever
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 82:2c:33:fd:91:81 brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2e:5f:c2:1e:4a:40 brd ff:ff:ff:ff:ff:ff
7: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 42:f2:f3:0e:18:37 brd ff:ff:ff:ff:ff:ff
    inet 10.201.15.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::40f2:f3ff:fe0e:1837/64 scope link
       valid_lft forever preferred_lft forever
8: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 62:c9:e1:f9:44:48 brd ff:ff:ff:ff:ff:ff
10: veth9PCNPR@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8908 qdisc noqueue master fan-252 state UP group default qlen 1000
    link/ether fe:54:70:62:e2:a5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::fc54:70ff:fe62:e2a5/64 scope link
       valid_lft forever preferred_lft forever
12: vethH72G7D@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8908 qdisc noqueue master fan-252 state UP group default qlen 1000
    link/ether fe:38:7d:fc:da:90 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::fc38:7dff:fefc:da90/64 scope link
       valid_lft forever preferred_lft forever
14: vethRVTEFS@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8908 qdisc noqueue master fan-252 state UP group default qlen 1000
    link/ether fe:7a:0d:78:95:71 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::fc7a:dff:fe78:9571/64 scope link
       valid_lft forever preferred_lft forever

look to be mapped correctly into the fan network, and the fan dnsmasq process is running. I then found this in the cloud-init log:

2018-03-15 10:35:03,625 - stages.py[WARNING]: Failed to rename devices: Failed to apply network config names. Found bad network config version: None
2018-03-15 10:35:03,627 - util.py[WARNING]: failed stage init-local
failed run of stage init-local
------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 650, in status_wrapper
    ret = functor(name, args)
  File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 357, in main_init
    init.apply_network_config(bring_up=bool(mode != sources.DSMODE_LOCAL))
  File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 654, in apply_network_config
    return self.distro.apply_network_config(netcfg, bring_up=bring_up)
  File "/usr/lib/python3/dist-packages/cloudinit/distros/__init__.py", line 171, in apply_network_config
    dev_names = self._write_network_config(netconfig)
  File "/usr/lib/python3/dist-packages/cloudinit/distros/debian.py", line 119, in _write_network_config
    return self._supported_write_network_config(netconfig)
  File "/usr/lib/python3/dist-packages/cloudinit/distros/__init__.py", line 90, in _supported_write_network_config
    renderer.render_network_config(network_config=network_config)
  File "/usr/lib/python3/dist-packages/cloudinit/net/renderer.py", line 53, in render_network_config
    network_state=parse_net_config_data(network_config), target=target)
  File "/usr/lib/python3/dist-packages/cloudinit/net/netplan.py", line 193, in render_network_state
    content = self._render_content(network_state)
  File "/usr/lib/python3/dist-packages/cloudinit/net/netplan.py", line 227, in _render_content
    if network_state.version == 2:
AttributeError: 'NoneType' object has no attribute 'version'
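
The traceback above can be reduced to a small sketch (an editor's illustration, not cloud-init's actual code): the parser returns None for a network config whose "version" key is missing or unrecognized, and the netplan renderer then dereferences that None without a guard.

```python
# Minimal sketch of the failure mode in the traceback above -- an editor's
# illustration, not cloud-init's actual code. The parser returns None for a
# network config whose "version" key is missing or unrecognized, and the
# netplan renderer then dereferences that None without a guard.

class NetworkState:
    def __init__(self, version):
        self.version = version

def parse_net_config_data(net_config):
    """Return a NetworkState, or None when the version is unrecognized."""
    version = net_config.get("version")
    if version not in (1, 2):
        # corresponds to "Found bad network config version: None"
        return None
    return NetworkState(version)

def render_network_state(network_state):
    # With network_state=None this raises:
    # AttributeError: 'NoneType' object has no attribute 'version'
    if network_state.version == 2:
        return "pass config through as netplan v2"
    return "convert v1 config to netplan"
```

A `network_state is None` check before rendering would turn the crash into a clear error message; the underlying problem here, though, was the bad network-config handed to cloud-init in the first place.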

ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: lxd 3.0.0~beta3-0ubuntu3
ProcVersionSignature: Ubuntu 4.15.0-10.11-generic 4.15.3
Uname: Linux 4.15.0-10-generic x86_64
ApportVersion: 2.20.8-0ubuntu10
Architecture: amd64
Date: Thu Mar 15 10:42:58 2018
Ec2AMI: ami-00000456
Ec2AMIManifest: FIXME
Ec2AvailabilityZone: nova
Ec2InstanceType: m1.xlarge
Ec2Kernel: unavailable
Ec2Ramdisk: unavailable
JournalErrors:
 -- Logs begin at Thu 2018-03-15 10:29:35 UTC, end at Thu 2018-03-15 10:43:02 UTC. --
 Mar 15 10:29:59 hostname iscsid[1048]: iSCSI daemon with pid=1051 started!
 Mar 15 10:32:46 hostname systemd[1]: Failed to start Router advertisement daemon for IPv6.
ProcEnviron:
 TERM=screen
 PATH=(custom, no user)
 LANG=C.UTF-8
 SHELL=/bin/bash
SourcePackage: lxd
UpgradeStatus: No upgrade log present (probably fresh install)

Revision history for this message
James Page (james-page) wrote :
James Page (james-page)
summary: - LXD containers don't get fan network IP addresses
+ bionic: LXD containers don't get fan network IP addresses
Revision history for this message
John A Meinel (jameinel) wrote : Re: [Bug 1756040] [NEW] bionic: LXD containers don't get fan network IP addresses

I'll note that I tried just launching a bionic container using snap LXD
3.0.0.beta5, and after "lxc launch ubuntu:x" and "lxc launch ubuntu-daily:b"
neither of them came up with an IP address.

I might have broken the networking on this machine because I was trying to
install stock Juju, which tries to bring in stock lxd, ignoring the snap,
but I'm also seeing failures of containers to get IP addresses.


Ryan Beisner (1chb1n)
tags: added: uosci
Revision history for this message
Stéphane Graber (stgraber) wrote :

Does manually running dhclient against eth0 in the container work?

Looks like cloud-init is a bit unhappy about your provided network-config, which likely causes the netplan config to be skipped, which in turn leads to no networkd config and so no IPv4 config in the container.
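
The suggested check could be run from the host with something like the following; the container name is taken from the report and `eth0` from the config snippet, but both are assumptions about the reader's environment:

```shell
# Run dhclient by hand inside the container to see whether DHCP itself works
# (container name comes from the report; adjust for your deployment).
lxc exec juju-4e2443-0-lxd-0 -- ip -4 addr show dev eth0   # no inet line expected
lxc exec juju-4e2443-0-lxd-0 -- dhclient -v eth0           # request a lease manually
lxc exec juju-4e2443-0-lxd-0 -- ip -4 addr show dev eth0   # look for a 252.x address
```

If dhclient obtains a lease, the LXD/fan plumbing is fine and the problem is confined to the in-container network configuration (cloud-init/netplan), which matches the traceback in the bug description.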

Revision history for this message
John A Meinel (jameinel) wrote :

Restarting my machine and disconnecting from the VPN made them work, so I
don't think snap LXD or bionic was at fault.

I'm not 100% sure whether bionic and netplan play nicely with the fan configuration.

Can you confirm whether both the host machine and the containers are bionic, or
is it just the containers that are bionic?

John
=:->


Revision history for this message
James Page (james-page) wrote :

This is bionic top to bottom.

Revision history for this message
Scott Moser (smoser) wrote :

Hi,
can you run:
 cloud-init collect-logs
and also get
 /var/lib/cloud/seed/
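
For reference, gathering the requested data might look like the following on the affected machine; `cloud-init collect-logs` writes a tarball in the current directory, and the seed path is copied verbatim from the comment:

```shell
# Collect cloud-init's logs and data into a tarball (cloud-init.tar.gz by default)
cloud-init collect-logs
# Archive the datasource seed for attachment to the bug
tar czf cloud-seed.tar.gz /var/lib/cloud/seed/
```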

Changed in cloud-init (Ubuntu):
status: New → Incomplete
Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in lxd (Ubuntu):
status: New → Confirmed
John A Meinel (jameinel)
Changed in juju:
status: New → Triaged
importance: Undecided → High
milestone: none → 2.4-beta1
Revision history for this message
Stéphane Graber (stgraber) wrote :

Closing LXD task for now as it looks like a cloud-init/nplan problem instead, the LXD plumbing seems fine.

Changed in lxd (Ubuntu):
status: Confirmed → Invalid
Changed in juju:
milestone: 2.4-beta1 → none
Revision history for this message
John A Meinel (jameinel) wrote :

We should also check into https://bugs.launchpad.net/juju/+bug/1751739 and see if there is a consistent issue here.

Changed in juju:
milestone: none → 2.5-beta1
Revision history for this message
Heather Lanigan (hmlanigan) wrote :

See also: https://bugs.launchpad.net/juju/+bug/1784730, could be a duplicate

Changed in juju:
assignee: nobody → Richard Harding (rharding)
Changed in juju:
milestone: 2.5-beta1 → 2.5-beta2
Changed in juju:
assignee: Richard Harding (rharding) → nobody
milestone: 2.5-beta2 → 2.5.1
Ian Booth (wallyworld)
Changed in juju:
milestone: 2.5.1 → 2.5.2
Revision history for this message
Richard Harding (rharding) wrote :

This appears to have been addressed, though the bug was not updated; the fix is in a 2.5 release as well as a 2.4 release.

Changed in juju:
status: Triaged → Fix Released
Revision history for this message
Ryan Harper (raharper) wrote :

Comment #3 correctly diagnoses the issue (bad network-config), which caused cloud-init to error out, and Comment #11 suggests that this configuration has been resolved. I'm closing the cloud-init task here. Please re-open if you believe there is a cloud-init issue that needs fixing.

Changed in cloud-init (Ubuntu):
status: Incomplete → Invalid
Revision history for this message
Anastasia (anastasia-macmood) wrote :

Removing from a milestone as this work will not be done in 2.3 series.
