lxd containers fail to get IP address assigned when created by Juju

Bug #1784730 reported by Pedro Guimarães
This bug report is a duplicate of: Bug #1779897: container already exists.
Affects          Status    Importance  Assigned to  Milestone
Canonical Juju   Triaged   High        Unassigned
juju 2.4 series  New       Undecided   Unassigned

Bug Description

Using Juju 2.4.1-bionic-amd64 on an OpenStack-over-OpenStack deployment; hosts are running LXD 3.0.1. Deployed the following overcloud bundle: https://github.com/openstack-charmers/openstack-bundles/blob/master/development/openstack-base-bionic-queens/bundle.yaml. LXD containers are using FAN networking.

Deployment current status: https://pastebin.canonical.com/p/FFmB53tKYv/

Cloud-init is failing to run on Juju-created LXD containers:
https://pastebin.ubuntu.com/p/8ZXzsGccXv/

Running dhclient on one of the containers gets an IP address that is not on the FAN network.
Containers created manually with LXC work (although they do not use FAN networking).
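A quick way to sanity-check which side of the FAN a leased address landed on is to test it against the overlay range. A minimal sketch, assuming 250.0.0.0/8 as the overlay (that is the range used in the stock Ubuntu FAN examples; a deployment may configure a different one):

```shell
# Classify an IPv4 address as inside or outside the FAN overlay.
# 250.0.0.0/8 is an assumption here -- the default Ubuntu FAN
# overlay range -- not something read from this deployment.
is_fan_addr() {
    case "$1" in
        250.*) echo "fan" ;;
        *)     echo "not-fan" ;;
    esac
}

is_fan_addr 250.1.2.3      # an overlay-range address
is_fan_addr 10.143.150.26  # a host/OpenStack network address
```

If the address dhclient hands out classifies as "not-fan", the container is leasing from the underlay rather than the FAN bridge, consistent with the behaviour described above.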

Similar bugs have been reported:
https://bugs.launchpad.net/juju/+bug/1762700 - describes same problem but states that issue has been solved on Juju 2.4.1 (which is mine version)
https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1756040 - does not mark 2.4.1

As suggested in Bug 1762700, comment #26, running the following inside /var/lib/cloud/seed/nocloud-net:
cat network-config
returns:
network:
  config: "disabled"
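For contrast, a NoCloud seed in which cloud-init is expected to configure networking would carry a real network definition rather than "disabled". The fragment below is hypothetical (cloud-init network-config v1, DHCP on eth0); Juju presumably writes "disabled" deliberately because it renders the container's network configuration itself:

```yaml
# Hypothetical NoCloud network-config (cloud-init v1 format):
# what the seed would look like if cloud-init, not Juju, were
# expected to bring up eth0 via DHCP.
network:
  version: 1
  config:
    - type: physical
      name: eth0
      subnets:
        - type: dhcp
```

So "disabled" by itself is not the bug; the question is whether the configuration Juju renders in cloud-init's place actually reaches the container.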

ip addr list on one of the containers returns:
root@juju-5ed1b3-5-lxd-2:~# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8908 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:e5:7e:19 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::216:3eff:fee5:7e19/64 scope link
       valid_lft forever preferred_lft forever
16: eth1@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:aa:13:66 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::216:3eff:feaa:1366/64 scope link
       valid_lft forever preferred_lft forever
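The output above can be checked mechanically: eth0 and eth1 are UP but carry only link-local IPv6 addresses. A small diagnostic sketch that flags such interfaces from `ip -o addr show`-style one-line output (field positions, $2 = ifname and $3 = address family, are an assumption based on iproute2's flat "-o" format):

```shell
# From `ip -o addr show`-style output on stdin, print interfaces
# that never report an IPv4 ("inet") address.
no_ipv4() {
    awk '$3 == "inet" { v4[$2] = 1 }
         { seen[$2] = 1 }
         END { for (i in seen) if (!(i in v4)) print i }'
}

# Sample mirroring the bug: eth0 carries only a link-local IPv6.
printf '%s\n' \
  '1: lo    inet 127.0.0.1/8 scope host lo' \
  '14: eth0    inet6 fe80::216:3eff:fee5:7e19/64 scope link' | no_ipv4
# prints: eth0
```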

juju crashdump -m default fails with:
Command "timeout 45s juju run --all 'sh -c "sudo find /etc/alternatives /etc/ceph /etc/cinder /etc/glance /etc/keystone /etc/netplan /etc/network /etc/neutron /etc/nova /etc/quantum /etc/swift /opt/nedge/var/log /run/cloud-init /usr/share/lxc/config /var/lib/charm /var/lib/libvirt/filesystems/plumgrid-data/log /var/lib/libvirt/filesystems/plumgrid/var/log /var/log /var/lib/juju /var/lib/lxd/containers/*/rootfs/etc/alternatives /var/lib/lxd/containers/*/rootfs/etc/ceph /var/lib/lxd/containers/*/rootfs/etc/cinder /var/lib/lxd/containers/*/rootfs/etc/glance /var/lib/lxd/containers/*/rootfs/etc/keystone /var/lib/lxd/containers/*/rootfs/etc/netplan /var/lib/lxd/containers/*/rootfs/etc/network /var/lib/lxd/containers/*/rootfs/etc/neutron /var/lib/lxd/containers/*/rootfs/etc/nova /var/lib/lxd/containers/*/rootfs/etc/quantum /var/lib/lxd/containers/*/rootfs/etc/swift /var/lib/lxd/containers/*/rootfs/opt/nedge/var/log /var/lib/lxd/containers/*/rootfs/run/cloud-init /var/lib/lxd/containers/*/rootfs/usr/share/lxc/config /var/lib/lxd/containers/*/rootfs/var/lib/charm /var/lib/lxd/containers/*/rootfs/var/lib/libvirt/filesystems/plumgrid-data/log /var/lib/lxd/containers/*/rootfs/var/lib/libvirt/filesystems/plumgrid/var/log /var/lib/lxd/containers/*/rootfs/var/log /var/lib/lxd/containers/*/rootfs/var/lib/juju -mount -type f -size -5000000c -o -size 5000000c 2>/dev/null | sudo tar -pcf /tmp/juju-dump-f7a2d9b9-e4aa-4e16-a9ca-31bfe9fda145.tar --files-from - 2>/dev/null; sudo tar --append -f /tmp/juju-dump-f7a2d9b9-e4aa-4e16-a9ca-31bfe9fda145.tar -C /tmp/f7a2d9b9-e4aa-4e16-a9ca-31bfe9fda145/addon_output . || true"'" failed
Command "juju scp 4/lxd/0:/tmp/juju-dump-f7a2d9b9-e4aa-4e16-a9ca-31bfe9fda145.tar 4efcde4c-7746-47aa-926d-f96eebd3ec6a.tar" failed
Command "tar -pxf 4efcde4c-7746-47aa-926d-f96eebd3ec6a.tar -C 4/lxd/0" failed
Command "rm 4efcde4c-7746-47aa-926d-f96eebd3ec6a.tar" failed

Tags: lxd
Revision history for this message
Heather Lanigan (hmlanigan) wrote :

Bug https://bugs.launchpad.net/juju/+bug/1762700 was caused by an error in a juju script: /etc/network/interfaces.py. See comment #18 for more info. Do you see errors related to interfaces.py in the logs in this case? There were some overlapping cloud-init issues there; I'm not sure if they got resolved, but they were unrelated to the juju piece of the bug.

It looks like bug 1756040 should still be marked against 2.4. On the cloud-init piece, could you please provide the data per comment #6? That cloud-init trace keeps happening and we should really get it resolved.

Changed in juju:
milestone: none → 2.5-beta1
status: New → Triaged
importance: Undecided → High
tags: added: lxd
Revision history for this message
Pedro Guimarães (pguimaraes) wrote :

Not sure if it is related to this bug, but another issue I am facing on this same deploy is that some of the containers are reported as already existing.
This is a fresh deploy from a couple of weeks ago. I have only one controller, which configures everything.

Machine State DNS Inst id Series AZ Message
4 started 10.5.0.8 85489d7b-a094-40b3-9083-430940037081 bionic nova ACTIVE
4/lxd/0 started 10.143.150.26 juju-5ed1b3-4-lxd-0 bionic nova Container started
4/lxd/1 pending juju-5ed1b3-4-lxd-1 bionic nova Container started
4/lxd/2 pending juju-5ed1b3-4-lxd-2 bionic nova Container started
5 started 10.5.0.15 821a9119-30b3-4f85-8b94-12e0896b72a3 bionic nova ACTIVE
5/lxd/0 pending juju-5ed1b3-5-lxd-0 bionic nova Container started
5/lxd/1 down pending bionic Container 'juju-5ed1b3-5-lxd-1' already exists
5/lxd/2 pending juju-5ed1b3-5-lxd-2 bionic nova Container started
6 started 10.5.0.7 8202c07f-23f8-4ee9-a6aa-1951635254a3 bionic nova ACTIVE
6/lxd/0 down pending bionic Container 'juju-5ed1b3-6-lxd-0' already exists
6/lxd/1 pending juju-5ed1b3-6-lxd-1 bionic nova Container started
6/lxd/2 pending juju-5ed1b3-6-lxd-2 bionic nova Container started
7 started 10.5.0.10 42a3a3c7-b982-41aa-962e-ad46bcb5b7a8 bionic nova ACTIVE
7/lxd/0 pending juju-5ed1b3-7-lxd-0 bionic nova Container started
7/lxd/1 pending juju-5ed1b3-7-lxd-1 bionic nova Container started
7/lxd/2 pending juju-5ed1b3-7-lxd-2 bionic nova Container started

Revision history for this message
Joseph Phillips (manadart) wrote :

@pguimaraes You appear to be seeing this one:
https://bugs.launchpad.net/juju/+bug/1779897

It is fixed in the 2.4 and 2.5 edge channels.
