Linux containers not working on EC2

Bug #471615 reported by Matei Zaharia on 2009-11-02
This bug affects 3 people
Affects: lxc (Ubuntu)
Importance: Undecided
Assigned to: Unassigned

Bug Description

Binary package hint: lxc

Linux containers fail to start on Karmic EC2 instances. Here's how to reproduce the problem:

- Start up an instance of ami-5b1af932 (the current 32-bit EC2 AMI from http://uec-images.ubuntu.com/karmic/current/)
- Log in
- sudo apt-get update
- sudo apt-get install lxc
- Attempt to mount the cgroup file system (required for LXC), create a container, and run an application in it using the series of commands below:

ubuntu@domU-12-31-39-0A-D5-F6:~$ sudo mkdir /cgroup
ubuntu@domU-12-31-39-0A-D5-F6:~$ sudo mount -t cgroup cgroup /cgroup
ubuntu@domU-12-31-39-0A-D5-F6:~$ sudo lxc-create -n foo
ubuntu@domU-12-31-39-0A-D5-F6:~$ sudo lxc-execute -n foo /bin/echo hello

On the EC2 instances, the last command prints the following error:

lxc-execute: failed to clone(0x2c020000): Invalid argument
lxc-execute: Invalid argument - failed to fork into a new namespace
lxc-execute: failed to spawn '/usr/lib/lxc/lxc-init'

On a raw-hardware install of Karmic server, the commands work fine and lxc-execute prints "hello".
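
If I'm reading the clone(2) flag constants correctly, 0x2c020000 is CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWIPC | CLONE_NEWPID, so the "Invalid argument" presumably means the kernel was built without one of those namespaces (most likely the PID namespace, going by the lxc-checkconfig output below). One way to confirm which namespace options the -ec2 kernel was built with is to grep its config directly:

$ grep -E 'CONFIG_(UTS_NS|IPC_NS|PID_NS|NET_NS|USER_NS)' /boot/config-2.6.31-302-ec2

Disabled options show up as "# CONFIG_PID_NS is not set" lines.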

I think this may be due to kernel config differences. In particular, lxc-checkconfig reports several of the required options as disabled on EC2:

ubuntu@domU-12-31-39-0A-D5-F6:~$ lxc-checkconfig
Kernel config /proc/config.gz not found, looking in other places...
Found kernel config file /boot/config-2.6.31-302-ec2
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: disabled
User namespace: disabled
Network namespace: disabled
Multiple /dev/pts instances: disabled

--- Control groups ---
Cgroup: enabled
Cgroup namespace: enabled
Cgroup device: disabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
File capabilities: enabled

In contrast, on my raw hardware install, only "user namespace" and "cgroup device" are disabled.
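
Beyond the static config file, the controllers the running kernel actually provides should also be visible at runtime in /proc/cgroups; if the devices controller is missing from that list, it is compiled out rather than merely unmounted:

$ cat /proc/cgroups
$ grep devices /proc/cgroups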

Karoly Molnar (karoly-molnar) wrote :

The problem described above is still present in the nightly Lucid builds. Is LXC support going to be enabled in Lucid on EC2?

edison (sudison) wrote :

The cgroup devices controller is disabled on the Ubuntu 9.10 server kernel, so libvirt does not work when a cgroup hierarchy is mounted. libvirt automatically checks whether cgroups are enabled and, if so, adds its device list to the devices controller. If it can't find cgroup.devices, you can't start a VM...
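
If that description is right, what libvirt needs is the devices controller in the mounted hierarchy. As a rough check (assuming the /cgroup mount point from the original report), you can try mounting just that controller; on a kernel built without CONFIG_CGROUP_DEVICE the mount should simply fail, and with it enabled the devices.allow/devices.deny/devices.list files appear under the mount point:

$ sudo mount -t cgroup -o devices cgroup /cgroup
$ ls /cgroup/devices.*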

Dustin Kirkland  (kirkland) wrote :

<niemeyer> ubuntu@domU-12-31-39-05-58-28:~/chroot$ sudo schroot -c lucid-i386-source -u root
<niemeyer> (lucid-i386-source)root@domU-12-31-39-05-58-28:/home/ubuntu/chroot#
<niemeyer> Hey, it works :-)
<niemeyer> We need to update this: #471615 - Linux containers not working on EC2

Changed in lxc (Ubuntu):
status: New → Fix Released
status: Fix Released → New
Dustin Kirkland  (kirkland) wrote :

Sorry, rolling that back to "new". I think Gustavo used a schroot. However, I heard earlier today that LXC now works in the 10.04 images on EC2.

Gustavo, try the lxc commands above.

Dustin Kirkland  (kirkland) wrote :

<niemeyer> ubuntu@domU-12-31-39-05-58-28:~$ sudo lxc-execute -n foo /bin/echo hello
<niemeyer> hello
<niemeyer> Works either way

And now, it appears to actually work.

Changed in lxc (Ubuntu):
status: New → Fix Released
Gustavo Niemeyer (niemeyer) wrote :

And that too:

$ sudo lxc-execute -n foo /bin/echo hello
hello
