Comment 0 for bug 971621

Eric Dodemont (dodeeric) wrote:

When I launch more than one LXC instance and then try to delete one of them (the first one, for example), the wrong rootfs is unmounted and disconnected from its nbd device (the last one's, for example).

For example (V = present, X = gone):

Before:

name       status   nbd --> rootfs_path                      veth_if --> IP      cgroup
----------------------------------------------------------------------------------------
instance1  ACTIVE   nbd15 --> .../instance-00000001/rootfs   veth0 --> 10.0.0.2  V
instance2  ACTIVE   nbd14 --> .../instance-00000002/rootfs   veth1 --> 10.0.0.3  V
instance3  ACTIVE   nbd13 --> .../instance-00000003/rootfs   veth2 --> 10.0.0.4  V

nova delete instance1

After:

name       status   nbd --> rootfs_path                      veth_if --> IP      cgroup
----------------------------------------------------------------------------------------
instance1  SHUTOFF  nbd15 --> .../instance-00000001/rootfs   X --> X             X
instance2  ACTIVE   nbd14 --> .../instance-00000002/rootfs   veth1 --> 10.0.0.3  V
instance3  ACTIVE   X --> X                                  veth2 --> 10.0.0.4  V
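For reference, the nbd --> rootfs_path column in the tables above can be collected with a small helper like the one below (a hypothetical sketch, not nova code; it assumes the nbd kernel driver exposes a 'pid' attribute under /sys/block/nbdN/ only while a client is connected to that device):

#!/usr/bin/env python
# Hypothetical helper: list connected nbd devices and their mountpoints.
import glob

def active_nbd_devices():
    """Yield (device, mountpoint) for every connected nbd device."""
    mounts = {}
    with open('/proc/mounts') as f:
        for line in f:
            dev, mountpoint = line.split()[:2]
            mounts[dev] = mountpoint
    for pid_file in glob.glob('/sys/block/nbd*/pid'):
        # The 'pid' attribute only exists while a client is connected.
        dev = '/dev/' + pid_file.split('/')[3]
        yield dev, mounts.get(dev, '(not mounted)')

if __name__ == '__main__':
    for dev, mountpoint in sorted(active_nbd_devices()):
        print('%s --> %s' % (dev, mountpoint))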

Specifications:

- Host: KVM VM
- Host OS: Ubuntu Precise beta2 cloud image (from http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64.tar.gz)
- Guest OS: Ubuntu Precise beta2 cloud image (from http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64.tar.gz)
- OpenStack version: folsom-1 (probably the exact same behavior with essex-rc1), from the GitHub master branch via devstack
- Virtualization: LXC (Linux Containers)
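
My guess (not verified against the code) is that the disconnect path searches for an nbd device at teardown time instead of reusing the device that was recorded when the instance booted. A minimal sketch of bookkeeping that would avoid this; it uses the real qemu-nbd -c/-d flags, but the function and variable names are hypothetical, not the actual nova implementation:

import subprocess

# Hypothetical sketch: remember which nbd device each instance's rootfs
# was attached to, and free exactly that device on teardown.
_device_for_instance = {}  # instance name -> /dev/nbdN

def connect_rootfs(instance, image, device):
    # qemu-nbd -c attaches the image to the given nbd device.
    subprocess.check_call(['qemu-nbd', '-c', device, image])
    _device_for_instance[instance] = device

def disconnect_rootfs(instance):
    # Reuse the recorded device; re-deriving "the" device with a fresh
    # search is what appears to hit the wrong instance here.
    device = _device_for_instance.pop(instance)
    subprocess.check_call(['qemu-nbd', '-d', device])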