Unable to start VMs under a lxc container
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
libvirt (Ubuntu) | Invalid | High | Unassigned |
Vivid | Fix Released | High | Unassigned |
Wily | Invalid | High | Unassigned |
Bug Description
=======
1. Impact: cannot start VMs inside a container
2. Devel solution: drop the cgmanager patches
3. Stable solution: same as the devel solution
4. Test case: see comment #7 below for a very detailed and easy-to-use example
5. Regression potential: no regressions are expected, and the lp:qa-regression-tests suite passed.
=======
Our installer attempts to launch VMs within an LXC container. This worked on trusty; however, with the latest releases on vivid and wily those VMs fail to start. This affects running juju-local within a container as well.
We suspect it has something to do with cgmanager/cgproxy; however, we can run and launch qemu manually just fine within the container. The issue may be libvirt attempting to set cgroup values for the qemu processes.
The current workaround to get a VM to start within a vivid or wily container running LXC 1.1.2+ is to disable cgroups in /etc/libvirt/ (sketched below).
I'm filing this bug and will ask Ryan Harper to provide more in-depth results from his debugging, but wanted to get this on the radar.
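For reference, a minimal sketch of what "disable cgroups" means here, assuming the setting involved is cgroup_controllers in /etc/libvirt/qemu.conf (the bug does not name the exact file or key, so treat this as an assumption):
```
# /etc/libvirt/qemu.conf -- hypothetical sketch of the workaround
# An empty list tells libvirtd not to place qemu processes in any cgroup
# controllers, so it stops writing per-VM cgroup values inside the container.
cgroup_controllers = [ ]
```
libvirtd needs to be restarted after editing the file for the change to take effect.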
Specs:
Host:
Vivid, kernel: 3.19.0-15-generic, lxc: 1.1.3+stable~
Container:
Vivid, kernel: 3.19.0-15-generic, lxc: 1.1.2-0ubuntu3.1, libvirt 1.2.12-0ubuntu14.1
tags: added: cloud-installer
Changed in libvirt (Ubuntu Vivid):
importance: Undecided → High
Changed in libvirt (Ubuntu Wily):
importance: Undecided → High
description: updated
tags: removed: verification-failed
Here is the bit of code we use to create a container; it shells out to lxc-create, passing in a custom userdata file:
```
@classmethod
def create(cls, name, userdata):
    """ creates a container from ubuntu-cloud template
    """
    # NOTE: the -F template arg is a workaround. it flushes the lxc
    # ubuntu template's image cache and forces a re-download. It
    # should be removed after https://github.com/lxc/lxc/issues/381 is
    # resolved.
    flushflag = "-F"
    if os.getenv("USE_LXC_IMAGE_CACHE"):
        log.debug("USE_LXC_IMAGE_CACHE set, so not flushing in lxc-create")
        flushflag = ""
    out = utils.get_command_output(
        'sudo -E lxc-create -t ubuntu-cloud '
        '-n {name} -- {flushflag} '
        '-u {userdatafilename}'.format(name=name,
                                       flushflag=flushflag,
                                       userdatafilename=userdata))
    if out['status'] > 0:
        raise Exception("Unable to create container: "
                        "{0}".format(out['output']))
    return out['status']
```
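For illustration only, here is how that classmethod might be invoked; the enclosing Container class name and the paths are assumptions made for the example, not taken from the installer:
```
# Hypothetical call site for the create() classmethod above.
# 'Container', the container name, and the userdata path are placeholders.
name = 'my-container'
userdata = '/tmp/userdata.yaml'
Container.create(name, userdata)
# roughly equivalent to running:
#   sudo -E lxc-create -t ubuntu-cloud -n my-container -- -F -u /tmp/userdata.yaml
```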
We also set up a custom lxc config file and fstab for the container:
```
def create_container_and_wait(self):
    """ Creates container and waits for cloud-init to finish
    """
    self.tasker.start_task("Creating Container",
                           self.read_container_status)

    with open(os.path.join(self.container_abspath, 'fstab'), 'w') as f:
        f.write("{0} {1} none bind,create=dir\n".format(
            self.config.cfg_path,
            'home/ubuntu/.cloud-install'))
        f.write("/var/cache/lxc var/cache/lxc none bind,create=dir\n")

        # Detect additional charm plugins and make available to the
        # container.
        charm_plugin_dir = self.config.getopt('charm_plugin_dir')
        if charm_plugin_dir \
                and self.config.cfg_path not in charm_plugin_dir:
            plug_dir = os.path.abspath(
                self.config.getopt('charm_plugin_dir'))
            plug_base = os.path.basename(plug_dir)
            f.write("{d} home/ubuntu/{m} "
                    "none bind,create=dir\n".format(d=plug_dir,
                                                    m=plug_base))

        if extra_mounts:
            for d in extra_mounts.split(','):
                ...

    # update container config
    with open(os.path.join(self.container_abspath, 'config'), 'a') as f:
        f.write("lxc.mount.auto = cgroup:mixed\n"
                "lxc.start.auto = 1\n"
                "lxc.start.delay = 5\n"
                "lxc.mount = {}/fstab\n".format(self.container_abspath))
```
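To make the effect concrete, the lines appended to the container's config by the snippet above end up looking like this (the fstab path shown is a placeholder for self.container_abspath):
```
lxc.mount.auto = cgroup:mixed
lxc.start.auto = 1
lxc.start.delay = 5
lxc.mount = /path/to/container/fstab
```
With lxc.mount.auto = cgroup:mixed, the cgroup hierarchy is mounted inside the container with only the container's own cgroup writable, which is the environment libvirt then operates in when it tries to create per-VM cgroups.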
Here is the userdata.yaml file we output and pass into the container:
...