LXD broken on vmware in 2.3rc2

Bug #1733882 reported by Merlijn Sebrechts
Affects: Canonical Juju
Status: Fix Released
Importance: High
Assigned to: Witold Krecicki
Milestone: 2.3.2

Bug Description

Version: 2.3-rc2-xenial-amd64
VMWare: Version 6.5.0.10000 Build 6816762

Our network setup is the following: we have two networks, a `primary-network` running a DHCP server with DDNS, and an `external-network` where users need to add an IP address manually. This is because public IPs are very scarce in our organization and we can't just give Juju an entire range.

With the current network setup, creating an LXD container on a VM makes the VM unreachable. If we remove the `external-network` from the setup, LXD containers never come up. Let me know if you need more logs. I might also be able to give you access to this cluster, if that's necessary.

# With external network

$ juju bootstrap vmware1 test-with-ext --config primary-network=V31_TENGU --config datastore=NFSSTORE1 --config external-network=V28_IBBTDMZ
$ juju add-machine
$ juju deploy ubuntu --to lxd:0
$ juju status
Model Controller Cloud/Region Version SLA
default test-with-ext vmware1/ILABT 2.3-rc2 unsupported

App Version Status Scale Charm Store Rev OS Notes
ubuntu waiting 0/1 ubuntu jujucharms 10 ubuntu

Unit Workload Agent Machine Public address Ports Message
ubuntu/0 waiting allocating 0/lxd/0 waiting for machine

Machine State DNS Inst id Series AZ Message
0 down 10.10.139.129 juju-b96dff-0 xenial poweredOn
0/lxd/0 pending pending xenial starting

ifconfig: http://paste.ubuntu.com/26020431/
interfaces-cloud-init: http://paste.ubuntu.com/26020432/
interfaces: http://paste.ubuntu.com/26020433/
machine-0.log: http://paste.ubuntu.com/26020434/

# Without external network

$ juju bootstrap vmware1 test-no-ext --config primary-network=V31_TENGU --config datastore=NFSSTORE1
$ juju add-machine
$ juju deploy ubuntu --to lxd:0
$ juju status
Model Controller Cloud/Region Version SLA
default test-no-ext vmware1/ILABT 2.3-rc2 unsupported

App Version Status Scale Charm Store Rev OS Notes
ubuntu waiting 0/1 ubuntu jujucharms 10 ubuntu

Unit Workload Agent Machine Public address Ports Message
ubuntu/0 waiting allocating 0/lxd/0 waiting for machine

Machine State DNS Inst id Series AZ Message
0 started 10.10.139.127 juju-7a3bcd-0 xenial poweredOn
0/lxd/0 pending juju-7a3bcd-0-lxd-0 xenial Container started

Logs of machine 0: http://paste.ubuntu.com/26020258/

$ cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto lo
iface lo inet loopback

auto ens192
iface ens192 inet manual

auto br-ens192
iface br-ens192 inet dhcp
    bridge_ports ens192

$ cat /etc/network/interfaces.d/50-cloud-init.cfg
# This file is generated from information provided by
# the datasource. Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
auto lo
iface lo inet loopback

auto ens192
iface ens192 inet dhcp

$ ifconfig
br-ens192 Link encap:Ethernet HWaddr 00:50:56:a4:f9:8f
          inet addr:10.10.139.127 Bcast:10.10.139.255 Mask:255.255.252.0
          inet6 addr: fe80::250:56ff:fea4:f98f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:10730 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7053 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:168101174 (168.1 MB) TX bytes:765955 (765.9 KB)

ens192 Link encap:Ethernet HWaddr 00:50:56:a4:f9:8f
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:117180 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7086 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:175270310 (175.2 MB) TX bytes:774901 (774.9 KB)

lo Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING MTU:65536 Metric:1
          RX packets:160 errors:0 dropped:0 overruns:0 frame:0
          TX packets:160 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:11840 (11.8 KB) TX bytes:11840 (11.8 KB)

lxdbr0 Link encap:Ethernet HWaddr 1e:0c:08:bd:22:cb
          inet addr:10.0.125.1 Bcast:0.0.0.0 Mask:255.255.255.0
          inet6 addr: fe80::1c0c:8ff:febd:22cb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B) TX bytes:570 (570.0 B)

vethDL2BEY Link encap:Ethernet HWaddr fe:00:0c:17:cf:27
          inet6 addr: fe80::fc00:cff:fe17:cf27/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:32 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8856 (8.8 KB) TX bytes:7866 (7.8 KB)

description: updated
Revision history for this message
Tim Penhey (thumper) wrote :

What is the subnet CIDR for the primary network?

I'm not entirely sure how we manage DHCP in containers. I'll defer to Witold.

It should be possible for you to configure the fan for the addressable containers within the controller, but since the primary network could be anything, this isn't done by default.
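
For reference, a minimal sketch of such a manual fan setup, using the model-config keys that come up later in this thread (both CIDRs must be adapted to the local networks):

$ juju model-config container-networking-method=fan \
      fan-config=10.10.0.0/16=253.0.0.0/8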

Changed in juju:
status: New → Incomplete
assignee: nobody → Witold Krecicki (wpk)
Revision history for this message
Merlijn Sebrechts (merlijn-sebrechts) wrote :

The primary network is 10.10.136.0/22 (netmask 255.255.252.0).

Revision history for this message
Merlijn Sebrechts (merlijn-sebrechts) wrote :

But to be clear, there is a bigger issue than the containers not being addressable: the containers can't even reach the controller, so the NAT setup Juju used to do on non-MAAS clouds is broken too.

Ian Booth (wallyworld)
Changed in juju:
milestone: none → 2.3.1
importance: Undecided → High
status: Incomplete → Triaged
Changed in juju:
milestone: 2.3.1 → none
Tim Penhey (thumper)
Changed in juju:
milestone: none → 2.3.2
Witold Krecicki (wpk)
Changed in juju:
status: Triaged → In Progress
Revision history for this message
John A Meinel (jameinel) wrote :

"With the current network setup, creating an LXD container on a VM makes the VM unreachable."

That sounds like:
  https://bugs.launchpad.net/bugs/1737640

That was a bug in the ubuntu-fan package, which has been updated and, I believe, SRU'd to all relevant Ubuntu releases. So that portion might already be fixed.

For the rest, we'll try to reproduce locally.

Revision history for this message
Merlijn Sebrechts (merlijn-sebrechts) wrote :

I don't think it's that bug. We experienced that bug a while ago, but that is fixed now. The issue we have is still present today.

Revision history for this message
Witold Krecicki (wpk) wrote :

Could you try bootstrapping the controller with "--config container-networking-method=local" added to the command line and then checking if it works? There was a bug in autoconfiguring container networking in networkless environments; setting it manually should help.
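
Adapted to the commands from the bug description, that would look roughly like this (test-local is just a placeholder controller name):

$ juju bootstrap vmware1 test-local \
    --config primary-network=V31_TENGU \
    --config datastore=NFSSTORE1 \
    --config container-networking-method=local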

Revision history for this message
Merlijn Sebrechts (merlijn-sebrechts) wrote :

I can confirm that bootstrapping with "--config container-networking-method=local" works; both the host and the container have correct networking (though the container network is host-local with NAT).

Revision history for this message
Witold Krecicki (wpk) wrote :

As an experiment, could you check whether it works with --config container-networking-method=provider? The container should then get a DHCP-assigned IP from the physical network, but that's (currently) not a supported feature on VMware.

Witold Krecicki (wpk)
Changed in juju:
status: In Progress → Fix Committed
Revision history for this message
Merlijn Sebrechts (merlijn-sebrechts) wrote :

container-networking-method=provider results in an unreachable host and a container that is stuck on "starting".

Isn't container-networking-method=provider the default? That's what I expected, at least.

Revision history for this message
Witold Krecicki (wpk) wrote :

The default is 'provider' for providers with an explicit allocation of IP addresses for containers (MAAS), 'fan' for providers for which we can deduce the proper FAN settings (AWS, OpenStack, GCE), and 'local' for all other providers. Up until 2.2 the option did not exist, but the behaviour was 'provider'-like for MAAS and 'local' for the rest.
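
For reference, the effective method on an existing model can be checked or overridden with model-config; a sketch ('local' is just an illustrative value):

$ juju model-config container-networking-method          # query the current value
$ juju model-config container-networking-method=local    # override it for this model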

Revision history for this message
Witold Krecicki (wpk) wrote :

BTW, IIRC there's an 'allow promiscuous' option for the virtual switch in VMware - do you have it set? If not, that's the probable reason why containers don't work in 'provider' mode.
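
For a standard vSwitch this can be toggled from the ESXi shell; a sketch assuming esxcli access and a switch named vSwitch0 (the two extra flags are the ones mentioned in the follow-up comments):

$ esxcli network vswitch standard policy security set \
      --vswitch-name=vSwitch0 \
      --allow-promiscuous=true \
      --allow-mac-change=true \
      --allow-forged-transmits=true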

Revision history for this message
Merlijn Sebrechts (merlijn-sebrechts) wrote :

Allow promiscuous gives a different result, but it doesn't seem to be working either; containers aren't accessible:

Machine State DNS Inst id Series AZ Message
0 started 10.10.137.16 juju-9ebd18-0 xenial poweredOn
0/lxd/0 pending juju-9ebd18-0-lxd-0 xenial Container started

Changed in juju:
status: Fix Committed → Fix Released
Revision history for this message
Merlijn Sebrechts (merlijn-sebrechts) wrote :

@wpk

I'd like your confirmation that what we're seeing is the expected result, and whether I need to file new bugs for 3, 4 and 5.

1. upgrading a 2.3.1 model to 2.3.2 doesn't fix LXD, neither on old nor new machines.
2. creating a new 2.3.2 model fixes LXD: the container network is now host-only (local).

3. container-networking-method=fan and fan-config=10.10.0.0/16=192.168.0.0/8 results in the error "failed to start machine 0/lxd/0 (unable to setup network: host machine "0" has no available FAN devices in space(s) ""), retrying in 10s (8 more attempts)"

4. container-networking-method=provider and an external network without an IP address breaks the networking of both the host and the container.
5. container-networking-method=provider and only an internal network breaks the networking of the container.
  This is with "promiscuous mode", "MAC address changes" and "forged transmits" allowed. If I look at the `ifconfig` output, it seems as if more is wrong; the `lxdbr0` interface still has `10.0.187.1` as its IP, as if the container networking method were "local".

br-ens192 Link encap:Ethernet HWaddr 00:50:56:a4:15:06
          inet addr:10.10.137.58 Bcast:10.10.139.255 Mask:255.255.252.0
          inet6 addr: fe80::250:56ff:fea4:1506/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:14927 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11837 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:166431103 (166.4 MB) TX bytes:859619 (859.6 KB)

ens192 Link encap:Ethernet HWaddr 00:50:56:a4:15:06
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:376714 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11868 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:299912573 (299.9 MB) TX bytes:868937 (868.9 KB)

lo Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING MTU:65536 Metric:1
          RX packets:160 errors:0 dropped:0 overruns:0 frame:0
          TX packets:160 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:11840 (11.8 KB) TX bytes:11840 (11.8 KB)

lxdbr0 Link encap:Ethernet HWaddr b6:86:6c:b0:b6:1b
          inet addr:10.0.187.1 Bcast:0.0.0.0 Mask:255.255.255.0
          inet6 addr: fe80::b486:6cff:feb0:b61b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B) TX bytes:570 (570.0 B)

vethB20SMK Link encap:Ethernet HWaddr fe:90:2f:2f:06:eb
          inet6 addr: fe80::fc90:2fff:fe2f:6eb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:29 errors:0 dropped:0 overruns:0 frame:0
          TX packets:341 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:9150 (9.1 KB) TX bytes:90298...


Revision history for this message
Witold Krecicki (wpk) wrote :

> 1. upgrading a 2.3.1 model to 2.3.2 doesn't fix LXD, neither on old nor new machines.
This is a bug, albeit low priority as the workaround is quite simple -
https://bugs.launchpad.net/juju/+bug/1744243

> 2. creating a new 2.3.2 model fixes LXD: the container network is now host-only (local).
OK

> 3. container-networking-method=fan and fan-config=10.10.0.0/16=192.168.0.0/8 results in the error "failed to start machine 0/lxd/0 (unable to setup network: host machine "0" has no available FAN devices in space(s) ""), retrying in 10s (8 more attempts)"
Could you try the following (a concrete sketch follows below):
1. Setting the juju log level to debug
2. Setting fan-config to 10.10.0.0/16=253.0.0.0/8 and container-networking-method to fan
3. Checking on the host machine whether new fan-253 and ftun0 devices appeared
4. Adding a container on this machine
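
Concretely, something like this sketch (logging-config is the standard way to raise Juju's log level; lxd:0 assumes the container goes on machine 0):

$ juju model-config logging-config="<root>=DEBUG"
$ juju model-config container-networking-method=fan \
      fan-config=10.10.0.0/16=253.0.0.0/8
# on the host machine: check that the fan devices appeared
$ ip addr show fan-253
$ ip addr show ftun0
# add a container on this machine
$ juju add-machine lxd:0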

> 4. container-networking-method=provider and an external network without an IP address breaks the networking of both the host and the container.
Do you have any way of checking what's happening on the machine? When creating the container, Juju should create bridges for all the devices and connect the container to the proper one; something's definitely wrong here.

> 5. container-networking-method=provider and only an internal network breaks the networking of the container.
> This is with "promiscuous mode", "MAC address changes" and "forged transmits" allowed. If I look at the `ifconfig` output, it seems as if more is wrong; the `lxdbr0` interface still has `10.0.187.1` as its IP, as if the container networking method were "local".

lxdbr0 will always have a random 10/8 IP address, but with the 'provider' method it's not used; the container should be bridged with the physical device. Could you check in the output of "brctl show" whether the container veth device is bridged to the proper br- device? If yes, what do the system logs inside the container say?
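
For example (a sketch; the container name follows the pattern from the status output above):

# on the host: is the container's veth attached to the br- bridge?
$ brctl show
# inside the container: inspect the boot and network logs
$ lxc exec juju-9ebd18-0-lxd-0 -- journalctl -b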

Also, I'm wpk on freenode, so feel free to ping me; maybe we can debug these issues more 'realtime'.

Revision history for this message
Merlijn Sebrechts (merlijn-sebrechts) wrote :

Thanks for looking into it @wpk!

# 3 fan on vmware

fan-253 and ftun0 are present; ifconfig output http://paste.ubuntu.com/26417271/

LXD container is down: "unable to find host bridge for space(s) "" for container "2/lxd/0""

log output: https://paste.ubuntu.com/26417295/

# 4 provider with external network

juju-debug-log: http://paste.ubuntu.com/26417714/
brctl: http://paste.ubuntu.com/26417724
ifconfig: http://paste.ubuntu.com/26417728
journalctl: http://paste.ubuntu.com/26417729
route: http://paste.ubuntu.com/26417730

# 5 provider without external network

ifconfig output: http://paste.ubuntu.com/26417310/

It does seem as if the bridge is added correctly:

```
bridge name     bridge id           STP enabled     interfaces
br-ens192       8000.005056a41506   no              ens192
                                                    vethB20SMK
lxdbr0          8000.000000000000   no
```

`lxc list` shows the container running, but without an IP address:

+---------------------+---------+------+------+------------+-----------+
|        NAME         |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+---------------------+---------+------+------+------------+-----------+
| juju-f87094-0-lxd-0 | RUNNING |      |      | PERSISTENT | 0         |
+---------------------+---------+------+------+------------+-----------+

full systemd log from that unit: http://paste.ubuntu.com/26417363/
