The steps outlined in the initial bug report reliably (100%) reproduce the problem for me on Ubuntu 16.04; this has been tested in different environments (1x AMD, ca. 10x Intel).
Here's the short way to get there:
- Install a basic Ubuntu 16.04 Server
- apt-get install virt-manager (installing the GUI pulls in the heavy-lifting components)
- Create a libvirt/lxc container with a config something like:

<domain type='lxc'>
  <name>AnyName</name>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64'>exe</type>
    <init>/sbin/init</init>
  </os>
  <features>
    <privnet/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <filesystem type='file' accessmode='passthrough'>
      <driver type='loop' format='raw'/>
      <source file='/path/to/image.raw'/>
      <target dir='/'/>
    </filesystem>
    <interface type='bridge'>
      <mac address='00:16:3e:34:ea:4b'/>
      <source bridge='br1'/>
      <target dev='vnet2'/>
      <guest dev='eth0'/>
    </interface>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='lxc' port='0'/>
      <alias name='console0'/>
    </console>
    <hostdev mode='capabilities' type='misc'>
      <source>
        <char>/dev/net/tun</char>
      </source>
    </hostdev>
  </devices>
</domain>
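For reference, with the XML above saved to a file, the container can be defined and started via virsh (the filename container.xml is illustrative, and these commands assume a working libvirt host with the lxc driver):

```shell
# Register the domain with libvirt's lxc driver
virsh -c lxc:/// define container.xml

# Start it; "AnyName" is the <name> from the domain XML above
virsh -c lxc:/// start AnyName
```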
(I have experimented quite a lot, and it boils down to the loop-mounted file system)
- Start the container via virsh or virt-manager
- Restart libvirtd
- Examine state of the container in virsh or virt-manager vs. the state of the loop device via losetup
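The repro sequence above, as run from a shell (the systemctl invocation assumes systemd, as on Ubuntu 16.04; "AnyName" is the domain name from the XML):

```shell
# 1. Start the container
virsh -c lxc:/// start AnyName

# 2. Restart the libvirt daemon
systemctl restart libvirtd

# 3. Compare libvirt's view of the container with the host's view
#    of the loop device backing its root file system
virsh -c lxc:/// domstate AnyName   # libvirt's opinion of the container
losetup -a                          # loop devices still attached on the host
mount | grep loop                   # loop mounts visible on the host
```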
The important parts are:
- The container is shown as stopped
- The container doesn't reply to network requests or console connection requests (i.e. it seems truly dead)
- The loop device doesn't show up in host-side "mount | grep loop"
- libvirtd allows (re-)starting the container, which ends up with a double-mounted file system
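A quick way to confirm the double mount, assuming the image path from the domain XML (this is a sketch of the check, run on the host after the restart-then-restart sequence):

```shell
# After restarting libvirtd and then (re-)starting the container:
# if the diagnosis above is right, the same backing image appears
# behind more than one loop device.
losetup -a | grep 'image.raw'
```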
Migrating to lxd is not feasible in many environments. In addition, I am fully aware (and not criticizing!) that libvirt-lxc was/is unsupported. For me, the real bug is that this scenario is possible at all: if Ubuntu were to simply exclude libvirt's lxc driver, that would not be great, but it would at least be fool-proof.
The blocker to lxd adoption is not on the admin side (me), but on the end-user side: virt-manager is the favorite tool of SMB/NGO local admins, typically run via XQuartz on a Mac or Xming on Windows.
Please let me know if and when I can be of further help. I am willing to test and have quite a few testbeds at hand where I can easily create throw-away containers and ruin them. Since I tripped over this, I have migrated containers around so that every single customer now has one node running no containers, just so I can do exactly that.