non-default lxc-dir breaks local provider
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
juju-core | Fix Released | Low | Unassigned |
juju-core (Ubuntu) | Fix Released | Undecided | Unassigned |
Bug Description
I tried to use the local provider according to https:/
I initially hit this on my normal trusty workstation, but I reproduced it in a current, pristine trusty cloud-image VM (I just ran "run-adt-test -sUl" from lp:auto-package-testing).
$ sudo apt-get install juju-local
Now set up LXC for a different container path:
$ sudo mkdir /srv/lxc
$ echo "lxc.lxcpath = /srv/lxc" | sudo tee /etc/lxc/lxc.conf
Note that this works just fine with all the LXC tools.
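The claim that the LXC tools honour the override can be sanity-checked without juju; newer LXC also ships an lxc-config tool that can print lxc.lxcpath directly. A minimal sketch of the lookup logic, with an illustrative temp file (the `lxcpath` function name and the demo paths are mine, not part of LXC):

```shell
lxcpath() {
    # Print the container path a tool would use for a given config file,
    # falling back to the compiled-in default when there is no override.
    conf="$1"
    default="/var/lib/lxc"
    path=""
    if [ -r "$conf" ]; then
        path=$(sed -n 's/^[[:space:]]*lxc\.lxcpath[[:space:]]*=[[:space:]]*//p' "$conf" | tail -n 1)
    fi
    printf '%s\n' "${path:-$default}"
}

# A config mirroring the one from the report:
printf 'lxc.lxcpath = /srv/lxc\n' > /tmp/lxc-conf-demo
lxcpath /tmp/lxc-conf-demo    # → /srv/lxc
lxcpath /nonexistent          # → /var/lib/lxc
```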
$ juju generate-config
Now create a different juju root:
$ sudo mkdir /srv/juju; sudo chown ubuntu:ubuntu /srv/juju
$ sed -i '/root-dir/ { s/# //; s_:.*$_: /srv/juju_ }' .juju/environme
$ juju switch local
$ juju bootstrap
Logging to /home/ubuntu/
Starting MongoDB server (juju-db-
Bootstrapping Juju machine agent
Starting Juju machine agent (juju-agent-
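The sed one-liner above is easier to read against a stand-in file. This sketch (file path and contents are illustrative, not the real .juju config) shows the two substitutions: uncomment the root-dir line, then replace everything after the first colon with the new value:

```shell
# Stand-in for the juju environments config; only root-dir matters here.
cat > /tmp/env-demo.yaml <<'EOF'
local:
    type: local
    # root-dir: /home/ubuntu/.juju/local
EOF

# Same command as in the report, pointed at the demo file:
# 1) "s/# //"          drops the comment marker on the root-dir line
# 2) "s_:.*$_: /srv/juju_"  rewrites the value after the first colon
sed -i '/root-dir/ { s/# //; s_:.*$_: /srv/juju_ }' /tmp/env-demo.yaml
cat /tmp/env-demo.yaml
```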
So far so good:
$ juju status
environment: local
machines:
  "0":
    agent-state: started
    agent-version: 1.17.4.1
    dns-name: localhost
    instance-id: localhost
    series: trusty
services: {}
There are no actual containers running (per sudo lxc-ls --fancy), but that's the same as when using the default /var/lib/lxc/.
$ juju deploy mysql
Added charm "cs:precise/
After a while I see:
$ juju status
environment: local
machines:
  "0":
    agent-state: started
    agent-version: 1.17.4.1
    dns-name: localhost
    instance-id: localhost
    series: trusty
  "1":
    agent-state-info: '(error executing "lxc-start": command get_cgroup failed
      to receive response)'
    instance-id: pending
    series: precise
services:
  mysql:
    charm: cs:precise/mysql-36
    exposed: false
    relations:
      cluster:
      - mysql
    units:
      mysql/0:
        machine: "1"
In .juju/local/
2014-03-11 15:48:49 WARNING juju.worker.
2014-03-11 15:49:16 ERROR juju.container.lxc lxc.go:134 container failed to start: error executing "lxc-start": command get_cgroup failed to receive response
2014-03-11 15:49:16 ERROR juju.provisioner provisioner_
(Attaching the whole file)
Other than that, .juju/local/log/ contains a broken symlink "all-machines.log -> /var/log/
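The broken symlink can be detected mechanically: it is a path for which test -L succeeds but test -e fails. A sketch using a throwaway directory and a made-up target (the directory and target names are illustrative):

```shell
# Create a deliberately dangling symlink, like the one juju leaves behind.
d=$(mktemp -d)
ln -s /var/log/does-not-exist "$d/all-machines.log"

# -L: the path is a symlink; -e: its target resolves. Broken = -L && ! -e.
if [ -L "$d/all-machines.log" ] && [ ! -e "$d/all-machines.log" ]; then
    echo "broken symlink: $(readlink "$d/all-machines.log")"
fi
```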
Note that this works with the default /var/lib/lxc/ path.
Changed in juju-core: | |
milestone: | none → next-stable |
Changed in juju-core: | |
assignee: | nobody → Tim Penhey (thumper) |
summary: |
- fails to start container with local provider with non-default LXC path
+ template container contains user log mount in error |
Changed in juju-core: | |
assignee: | nobody → Jorge Niedbalski (niedbalski) |
affects: | juju (Ubuntu) → juju-core (Ubuntu) |
tags: | added: cts |
tags: | added: cts-cloud-review local-provider removed: cts local |
tags: | added: cts |
tags: | added: sts removed: cts |
Changed in juju-core: | |
assignee: | Jorge Niedbalski (niedbalski) → nobody |
tags: | removed: cts-cloud-review sts |
Changed in juju-core: | |
status: | Triaged → Fix Released |
Changed in juju-core (Ubuntu): | |
status: | Confirmed → Fix Released |
In that situation I see:
$ sudo lxc-ls --fancy
NAME                    STATE    IPV4  IPV6  AUTOSTART
------------------------------------------------------
ubuntu-local-machine-1  STOPPED  -     -     YES
Trying to boot it manually reveals the problem:
$ sudo lxc-start -n ubuntu-local-machine-1
lxc-start: No such file or directory - failed to mount '/home/ubuntu/.juju/local/log' on '/usr/lib/x86_64-linux-gnu/lxc/var/log/juju'
lxc-start: failed to setup the mount entries for 'ubuntu-local-machine-1'
lxc-start: failed to setup the container
lxc-start: invalid sequence number 1. expected 2
lxc-start: failed to spawn 'ubuntu-local-machine-1'
For testing I commented out the lxc.mount.entry for /home/ubuntu/.juju/local/log in /srv/lxc/ubuntu-local-machine-1/config and tried to start the container again. It boots now and starts cloud-init. I can't log in, though, as ubuntu/ubuntu does not work and I don't know which account it created, so I stopped the container manually. At this point I don't know how to tell juju to re-poke that instance to continue the setup.
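That manual workaround can be scripted. A sketch that comments out the offending lxc.mount.entry line in a stand-in config file (the file name and contents below are illustrative, not the exact container config):

```shell
# Stand-in for /srv/lxc/<container>/config with the problematic bind mount.
cat > /tmp/container-config-demo <<'EOF'
lxc.rootfs = /srv/lxc/demo/rootfs
lxc.mount.entry = /home/ubuntu/.juju/local/log var/log/juju none bind 0 0
EOF

# Prefix "# " onto any lxc.mount.entry line that bind-mounts the juju log dir.
sed -i '/^lxc\.mount\.entry.*\.juju\/local\/log/ s/^/# /' /tmp/container-config-demo
cat /tmp/container-config-demo
```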
Is this perhaps due to an AppArmor restriction? I don't see any rejections in dmesg, but a bind mount failing only with a non-default container path sounds like one.
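On the AppArmor point: denials are logged as kernel audit lines containing apparmor="DENIED", so that is the string worth searching for. A sketch over an illustrative log line (the sample line itself is made up):

```shell
# A fabricated example of what an AppArmor mount denial looks like in dmesg.
logline='audit: type=1400 apparmor="DENIED" operation="mount" profile="lxc-container-default" name="/var/log/juju/"'

# Extract just the verdict token; empty output would mean no denial matched.
printf '%s\n' "$logline" | grep -o 'apparmor="DENIED"'

# On a live system the equivalent check would be:
#   dmesg | grep 'apparmor="DENIED"'
```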