lxc-clone does not randomize MAC address - juju-local machines have same IPs

Bug #1362224 reported by Max Brustkern
This bug affects 1 person
Affects: lxc (Ubuntu)
Status: Confirmed
Importance: High
Assigned to: Stéphane Graber

Bug Description

When I run juju with the local provider on utopic, all of the containers are created with the same MAC address. I tried adding lxc-clone: true and lxc-clone-aufs: true to the local section of my environments.yaml, but now I can't get a bootstrap to complete, with or without those options.
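
For reference, the local section I used looked roughly like this (a sketch with other settings omitted, not my exact file):

environments:
  local:
    type: local
    lxc-clone: true
    lxc-clone-aufs: true

With or without those two options, the bootstrap fails like this: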
max@ursus:~/bzr/merges/nfss-insert-timestamp$ juju bootstrap
uploading tools for series [precise trusty utopic]
Logging to /home/max/.juju/local/cloud-init-output.log on remote host
Bootstrapping Juju machine agent
Starting Juju machine agent (juju-agent-max-local)
345b964937a7e650135bca9443637a92bacf83dfaae1323589f1946a010b97ff /home/max/.juju/local/tools/1.20.5.1-utopic-amd64/tools.tar.gz
2014-08-27 15:37:04 INFO juju.cmd supercommand.go:37 running jujud [1.20.5.1-utopic-amd64 gc]
2014-08-27 15:37:04 DEBUG juju.agent agent.go:377 read agent config, format "1.18"
2014-08-27 15:37:04 INFO juju.provider.local environprovider.go:42 opening environment "local"
2014-08-27 15:37:04 DEBUG juju.cmd.jujud bootstrap.go:210 starting mongo
2014-08-27 15:37:04 DEBUG juju.cmd.jujud bootstrap.go:235 calling ensureMongoServer
2014-08-27 15:37:04 INFO juju.mongo mongo.go:171 Ensuring mongo server is running; data directory /home/max/.juju/local; port 37017
2014-08-27 15:37:04 INFO juju.mongo mongo.go:326 installing juju-mongodb
2014-08-27 15:37:04 INFO juju.utils.apt apt.go:128 Running: [apt-get --option=Dpkg::Options::=--force-confold --option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet install juju-mongodb]
2014-08-27 15:37:05 DEBUG juju.mongo mongo.go:275 using mongod: /usr/lib/juju/bin/mongod --version: "db version v2.4.10\nWed Aug 27 11:37:05.432 git version: nogitversion\n"
2014-08-27 15:37:05 DEBUG juju.worker.peergrouper initiate.go:41 Initiating mongo replicaset; dialInfo &mgo.DialInfo{Addrs:[]string{"127.0.0.1:37017"}, Direct:false, Timeout:300000000000, FailFast:false, Database:"", Source:"", Service:"", Mechanism:"", Username:"", Password:"", PoolLimit:0, DialServer:(func(*mgo.ServerAddr) (net.Conn, error))(nil), Dial:(func(net.Addr) (net.Conn, error))(0x5be180)}; memberHostport "10.0.3.1:37017"; user ""; password ""
2014-08-27 15:37:05 DEBUG juju.mongo open.go:96 connection failed, will retry: dial tcp 127.0.0.1:37017: connection refused
2014-08-27 15:37:05 DEBUG juju.mongo open.go:96 connection failed, will retry: dial tcp 127.0.0.1:37017: connection refused
2014-08-27 15:37:05 INFO juju.mongo open.go:104 dialled mongo successfully
2014-08-27 15:37:05 INFO juju.replicaset replicaset.go:67 Initiating replicaset with config replicaset.Config{Name:"juju", Version:1, Members:[]replicaset.Member{replicaset.Member{Id:1, Address:"10.0.3.1:37017", Arbiter:(*bool)(nil), BuildIndexes:(*bool)(nil), Hidden:(*bool)(nil), Priority:(*float64)(nil), Tags:map[string]string{"juju-machine-id":"0"}, SlaveDelay:(*time.Duration)(nil), Votes:(*int)(nil)}}}
2014-08-27 15:37:05 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:06 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:06 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:07 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:08 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:08 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:09 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:09 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:10 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:10 INFO juju.mongo open.go:104 dialled mongo successfully
2014-08-27 15:37:10 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:11 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:11 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:12 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:12 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:13 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:13 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:14 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:14 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:15 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:15 WARNING juju.replicaset replicaset.go:87 Initiate: fetching replication status failed: cannot get replica set status: Received replSetInitiate - should come online shortly.
2014-08-27 15:37:16 INFO juju.worker.peergrouper initiate.go:63 replica set initiated
2014-08-27 15:37:16 INFO juju.worker.peergrouper initiate.go:71 finished MaybeInitiateMongoServer
2014-08-27 15:37:16 INFO juju.cmd.jujud bootstrap.go:135 started mongo
2014-08-27 15:37:16 INFO juju.mongo open.go:104 dialled mongo successfully
2014-08-27 15:37:18 DEBUG juju.agent bootstrap.go:88 initializing address [127.0.0.1:37017]
2014-08-27 15:37:18 INFO juju.state open.go:47 opening state, mongo addresses: ["127.0.0.1:37017"]; entity ""
2014-08-27 15:37:18 DEBUG juju.state open.go:48 dialing mongo
2014-08-27 15:37:18 INFO juju.mongo open.go:104 dialled mongo successfully
2014-08-27 15:37:18 DEBUG juju.state open.go:53 connection established
2014-08-27 15:37:18 INFO juju.mongo open.go:104 dialled mongo successfully
2014-08-27 15:37:18 INFO juju.state open.go:260 adding state server info to legacy environment
2014-08-27 15:37:18 INFO juju.state open.go:276 found existing state servers []
2014-08-27 15:37:19 INFO juju.state open.go:93 initializing environment
2014-08-27 15:37:19 DEBUG juju.agent bootstrap.go:93 connected to initial state
2014-08-27 15:37:20 DEBUG juju.agent bootstrap.go:131 adding admin user
2014-08-27 15:37:20 DEBUG juju.agent bootstrap.go:142 setting password hash for admin user
2014-08-27 15:37:20 INFO juju.provider.local environprovider.go:42 opening environment "local"
2014-08-27 15:37:21 INFO juju.agent bootstrap.go:165 initialising bootstrap machine with config: {Addresses:[public:localhost local-cloud:10.0.3.1] Constraints: Jobs:[JobManageEnviron] InstanceId:localhost Characteristics: SharedSecret:<<I don't think I need to post this>>}
2014-08-27 15:37:21 INFO juju.provider.local environprovider.go:42 opening environment "local"
2014-08-27 15:37:21 DEBUG juju.agent bootstrap.go:193 create new random password for machine 0
2014-08-27 15:37:22 INFO juju.cmd supercommand.go:329 command finished
start: Job is already running: juju-agent-max-local
Bootstrap failed, destroying environment
ERROR exit status 1
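
(A side note: the "start: Job is already running: juju-agent-max-local" line suggests a stale upstart job left over from an earlier bootstrap attempt. Stopping it before retrying, roughly

$ sudo service juju-agent-max-local stop

should give a clean slate for the next attempt, though it does not address the MAC issue itself.)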

ProblemType: Bug
DistroRelease: Ubuntu 14.10
Package: juju-core 1.20.5-0ubuntu1
ProcVersionSignature: Ubuntu 3.16.0-10.15-generic 3.16.1
Uname: Linux 3.16.0-10-generic x86_64
ApportVersion: 2.14.6-0ubuntu2
Architecture: amd64
CurrentDesktop: XFCE
Date: Wed Aug 27 11:19:11 2014
InstallationDate: Installed on 2013-05-31 (453 days ago)
InstallationMedia: Ubuntu 13.04 "Raring Ringtail" - Release amd64 (20130424)
SourcePackage: juju-core
SystemImageInfo: Error: [Errno 2] No such file or directory: 'system-image-cli'
UpgradeStatus: Upgraded to utopic on 2014-02-18 (190 days ago)

Martin Pitt (pitti) wrote:

I'm using juju-local with the default settings, i.e. without aufs, and I see the same problem. After deploying two services, I get:

$ juju status
environment: local
machines:
  "0":
    agent-state: started
    agent-version: 1.20.5.1
    dns-name: localhost
    instance-id: localhost
    series: utopic
    state-server-member-status: has-vote
  "5":
    agent-state: started
    agent-version: 1.20.5.1
    dns-name: 10.0.3.57
    instance-id: martin-local-machine-5
    series: trusty
    hardware: arch=amd64
  "6":
    agent-state: started
    agent-version: 1.20.5.1
    dns-name: 10.0.3.57
    instance-id: martin-local-machine-6
    series: trusty
    hardware: arch=amd64

$ sudo lxc-ls --fancy
NAME                      STATE    IPV4       IPV6  GROUPS  AUTOSTART
---------------------------------------------------------------------
juju-trusty-lxc-template  STOPPED  -          -     -       NO
martin-local-machine-5    RUNNING  10.0.3.57  -     -       YES
martin-local-machine-6    RUNNING  10.0.3.57  -     -       YES

They both have the same IP address, so their networking is broken. I can confirm that this is due to them having the same MAC address:

$ sudo lxc-attach -n martin-local-machine-5 ip a show dev eth0
9: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3e:c4:4e:24 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.57/24 brd 10.0.3.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fec4:4e24/64 scope link
       valid_lft forever preferred_lft forever
$ sudo lxc-attach -n martin-local-machine-6 ip a show dev eth0
11: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3e:c4:4e:24 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.57/24 brd 10.0.3.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fec4:4e24/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever

Note the "tentative dadfailed" flag on machine 6's link-local IPv6 address: duplicate address detection fails because the MAC is identical. This breaks juju-local pretty much completely.

Changed in juju-core (Ubuntu):
importance: Undecided → High
status: New → Confirmed
Martin Pitt (pitti) wrote:

For the record, both LXC configs have lxc.network.hwaddr = 00:16:3e:c4:4e:24, exactly the same as juju-trusty-lxc-template.
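
A quick way to spot the duplicates across all containers is to grep every config at once (the path is from my setup; the LXC default is /var/lib/lxc):

$ sudo grep -H lxc.network.hwaddr /scratch/lxc/*/config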

Martin Pitt (pitti) wrote:

I believe this is a bug/regression in LXC itself. The lxc-clone manpage says:

       -M, --keepmac
              Use the same MAC address as the original container, rather than generating a new random one.

If I clone an existing container

$ grep hwaddr /scratch/lxc/juju-trusty-lxc-template/config
lxc.network.hwaddr = 00:16:3e:c4:4e:24
$ sudo lxc-clone -o juju-trusty-lxc-template -n test

then the cloned container has the same MAC address instead of a random one:

$ grep hwaddr /scratch/lxc/test/config
lxc.network.hwaddr = 00:16:3e:c4:4e:24
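
Until this is fixed, one possible stopgap is to hand an affected clone a fresh random MAC yourself, roughly like this (using LXC's default 00:16:3e prefix and the container path from above; adjust to your setup):

$ MAC=$(printf '00:16:3e:%02x:%02x:%02x' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)))
$ sudo sed -i "s/^lxc.network.hwaddr = .*/lxc.network.hwaddr = $MAC/" /scratch/lxc/test/config

and then restart the container so the new address takes effect.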

affects: juju-core (Ubuntu) → lxc (Ubuntu)
Changed in lxc (Ubuntu):
assignee: nobody → Stéphane Graber (stgraber)
summary:
- local provider creates all containers with the same MAC address
+ lxc-clone does not randomize MAC address - juju-local machines have same IPs