Using virtio for block devices makes disks and partitions disappear in KVM/QEMU (using vmbuilder and libvirt)

Bug #517067 reported by Andreas Ntaflos
This bug affects 8 people
Affects                      Status        Importance  Assigned to
vm-builder (Ubuntu)          Fix Released  Medium      Unassigned
vm-builder (Ubuntu Lucid)    Won't Fix     Medium      Unassigned

Bug Description

Binary package hint: libvirt-bin

I can reproduce this: using bus='virtio' for virtual disks in a libvirt domain definition makes the disks thus defined invisible to KVM/QEMU. This leads to error messages like the following when booting, as well as to the disappearance of all partitions on the virtual disk except the root partition (in particular, no swap space is enabled):

One or more of the mounts listed in /etc/fstab cannot yet be mounted:
(ESC for recovery shell)
swap: waiting for /dev/sda2
/opt: waiting for /dev/sda3

A screenshot is available here: https://daff.pseudoterminal.org/misc/mountfailed.png

It also appears, just after starting the guest, that QEMU doesn't even recognize that any hard disk drives are present; it only sees the DVD drive. See the following screenshots for comparison.

bus='ide' in virtual disk domain definition: https://daff.pseudoterminal.org/misc/bus_ide.png
bus='virtio' in virtual disk domain definition: https://daff.pseudoterminal.org/misc/bus_virtio.png

The result is that only the root partition, probably thanks to initramfs/initrd and GRUB magic, is available to the guest when using virtio. If the guest was set up with nothing but a root and a swap partition, the boot process succeeds but still complains that it is waiting for the swap partition to become available. If any other partitions were defined during setup or installation, the boot process simply hangs with the error message described above.

Background: I want to take advantage of the paravirtualized I/O drivers offered for KVM-based guests. Using "virtio" as the bus for block devices, or as the model type for network interfaces, in the right places of a libvirt domain definition enables those drivers. To facilitate and automate this when creating guests, I edited vmbuilder's libvirt template to use virtio automatically. Note the target line:

#for $disk in $disks
    <disk type='file' device='disk'>
      <source file='$disk.filename' />
      <target dev='vd$disk.devletters()' bus='virtio' />
    </disk>
#end for
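
As a quick sanity check inside a booted guest, something like the following should confirm that the paravirtualized drivers are actually in use (a sketch; exact lspci output varies by QEMU version):

$ lspci | grep -i virtio    # expect entries like "Virtio block device"
$ ls /dev/vd*               # virtio disks appear as /dev/vda, /dev/vdb, ...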

Now, to create a guest with vmbuilder as per the Ubuntu Server Guide:

# vmbuilder kvm ubuntu --dest=/storage/vms/foo -m 2048 --rootsize=4000 \
    --optsize=4000 --hostname=foo --suite=karmic --bridge=br0 --overwrite \
    --libvirt=qemu:///system --flavour=virtual --arch=amd64 \
    --mirror=http://host:9999/ubuntu

Here a root and an /opt partition are defined, both 4 GB in size. Swap is created automatically and defaults to 1 GB. This works fine and results in the attached foo.xml libvirt domain definition. In particular, the disk definition looks like this (again, note the target line):

<disk type='file' device='disk'>
  <source file='/storage/vms/foo/disk0.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

This is what the libvirt and KVM documentation pages suggest. After starting this domain the results are as described above, i.e. either a hanging boot process or a missing swap partition.

When changing the target bus to bus='ide', everything works correctly: the domain boots as expected and contains all partitions defined during setup. This is completely reproducible; I can switch between a working and a non-working guest domain simply by toggling between bus='virtio' and bus='ide' in the domain definition.

This is on an Ubuntu Server Edition installation. I have no idea whether this is a problem with libvirt, with KVM or QEMU, or with something else entirely. More information follows; please also see the attached files. If there is anything else I can provide, please tell me.

lsb_release -rd:
Description: Ubuntu 9.10
Release: 9.10

apt-cache policy libvirt-bin:
libvirt-bin:
  Installed: 0.7.0-1ubuntu13.1
  Candidate: 0.7.0-1ubuntu13.1
  Version table:
 *** 0.7.0-1ubuntu13.1 0
        500 http://at.archive.ubuntu.com karmic-updates/main Packages
        100 /var/lib/dpkg/status
     0.7.0-1ubuntu13 0
        500 http://at.archive.ubuntu.com karmic/main Packages

uname -a:
Linux dhcp155 2.6.31-17-server #54-Ubuntu SMP Thu Dec 10 18:06:56 UTC 2009 x86_64 GNU/Linux

Tags: patch
Scott Moser (smoser) wrote:

The mountall messages appear because there *is* no /dev/sdX* device, yet your /etc/fstab says to mount them.

You've changed to /dev/vda*, so /etc/fstab will need to be updated.
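
A quick way to see the mismatch (a sketch; run inside the guest, or against a mounted guest image):

$ cat /proc/partitions    # in a virtio guest: vda, vda1, vda2 ... no sdX
$ grep sd /etc/fstab      # yet the fstab still references /dev/sdaN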

Andreas Ntaflos (daff) wrote:

Thanks for the comment, but how do you figure? I did not change anything after the fact; I modified the template used for installation and setup of the guest domain.

First, whether the virtual disks are visible to QEMU at all seems to depend on the bus type, as the screenshots show. At that point logical device names are of no consequence yet, because the kernel hasn't even booted.

Second, it does not matter whether the disks are called "vda" or "hda" (the default) in the libvirt domain definition. From the documentation: "The dev attribute indicates the "logical" device name. The actual device name specified is not guaranteed to map to the device name in the guest OS. Treat it as a device ordering hint." This is further supported by the fact that, upon switching the bus type to "ide", the disks are visible both to QEMU and to the booted guest kernel, even though the logical device name in the domain definition is still "vda"-based.

Third, the logical device name is defined and set early, during installation/setup of the guest, since vmbuilder reads the modified template in /etc/vmbuilder/libvirt. So the installation procedure is aware that disks are named "vda" in the definition and takes that into account (or should take that into account). It does not matter whether "hda" or "vda" is used there (I tried both, multiple times); /etc/fstab always ends up with "/dev/sdX*" entries.

So I don't think that is the problem here, but I will try again. Thanks for the comment, again!

Scott Moser (smoser) wrote: Re: [Bug 517067] Re: Using virtio for block devices makes disks and partitions disappear in KVM/QEMU (using vmbuilder and libvirt)

On Fri, 5 Feb 2010, Andreas Ntaflos wrote:

> So I don't think that is the problem here, but I will try again. Thanks
> for the comment, again!

I think if you look inside your guest image you will see 'sdaX' in
/etc/fstab. Disks presented to the booted system as virtio are named
'vdX'. The disks didn't disappear; they're named differently than your
/etc/fstab expects, so they're (correctly) not mounted.

Mount the guest image loopback, change sd -> vd, and I think you'll boot
happily. Alternatively, I think you can open up
'./VMBuilder/plugins/ubuntu/feisty.py' and hack in:
- disk_prefix = 'sd'
+ disk_prefix = 'vd'

The bug could be considered to be in one of two places:
a.) vmbuilder: its 'install_fstab' routines only ever receive 'sd' for
their self.disk_prefix
b.) linux/udev: you could suggest that the vdX devices should be
symlinked (or similar) to sdX, so that your fstab finds the devices
independently of how they're connected.
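
For the first route, a minimal sketch of fixing the fstab in place, assuming a qcow2 image (which needs qemu-nbd rather than a plain loop mount); paths and partition numbers are illustrative:

sudo modprobe nbd max_part=8
sudo qemu-nbd --connect=/dev/nbd0 /storage/vms/foo/disk0.qcow2
sudo mount /dev/nbd0p1 /mnt                         # guest root partition
sudo sed -i 's,/dev/sda,/dev/vda,g' /mnt/etc/fstab  # sd -> vd
sudo umount /mnt
sudo qemu-nbd --disconnect /dev/nbd0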

Chuck Short (zulcss)
affects: libvirt (Ubuntu) → vm-builder (Ubuntu)
Changed in vm-builder (Ubuntu):
importance: Undecided → Medium
status: New → Confirmed
Andreas Ntaflos (daff) wrote:

You are right, of course: /etc/fstab on the guest contains "/dev/sdXY" entries. These are generated by /usr/share/pyshared/VMBuilder/plugins/ubuntu/feisty.py, as you mentioned. After changing all "/dev/sdXY" to "/dev/vdXY" I can indeed boot happily :)

What threw me off was that QEMU did not (and does not) see these virtio-based drives when booting (see screenshots), and that changing the target device name in the domain definition (<target dev='vda' bus='virtio'/> to <target dev='sda' bus='virtio'/> and vice versa) does nothing. I suppose this has to do with the snippet of documentation I quoted previously, and probably with the fact that the virtio driver is paravirtualized, so QEMU doesn't do any of the emulation it would do for IDE- or SCSI-based buses. I should have thought of that earlier.

So creating guests in the manner described in my original post works after applying your one-line fix to /usr/share/pyshared/VMBuilder/plugins/ubuntu/feisty.py.
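
For reference, the equivalent non-interactive edit on the host would be something like this (a sketch; the path is as found here on Karmic, and it assumes the attribute appears literally as disk_prefix = 'sd', so back up the file first):

sudo sed -i "s/disk_prefix = 'sd'/disk_prefix = 'vd'/" \
    /usr/share/pyshared/VMBuilder/plugins/ubuntu/feisty.py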

I am not sure where this bug lies, since I am just getting to know the more advanced aspects of libvirt, but I suppose vmbuilder is to "blame". That's understandable given that it is still somewhat young and cannot consider all possible combinations and varieties of libvirt options. Too bad virt-install is mostly useless for quickly creating and deploying guests for testing purposes.

I like your solution b.), but doesn't that add yet another level of indirection and complicate guest setup even more?

Soren Hansen (soren)
Changed in vm-builder (Ubuntu):
milestone: none → ubuntu-10.04
Paul Sladen (sladen) wrote:

Surely the problem here is that the standard method of mounting by UUID is being overridden?

If mount-by-UUID is in use, then it doesn't matter whether a device shows up as /dev/{sd,hd,vd}*.
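
For illustration (the UUID below is made up), device naming becomes irrelevant once /etc/fstab refers to filesystems by UUID:

$ sudo blkid /dev/vda1
/dev/vda1: UUID="2d4f10d6-be57-4e1d-92ef-424355bd4e39" TYPE="ext4"

# corresponding /etc/fstab entry, valid regardless of sd/hd/vd naming:
UUID=2d4f10d6-be57-4e1d-92ef-424355bd4e39  /  ext4  defaults  0  1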

Thierry Carrez (ttx)
Changed in vm-builder (Ubuntu Lucid):
milestone: ubuntu-10.04 → none
seph (seph) wrote:

I agree with Paul Sladen -- if this used the UUID mechanism it would be simpler to change. That's also what most of the kvm docs indicate. So why isn't vmbuilder using it?

Will Bryant (willbryant) wrote:

Yeah, UUID seems like a better idea.

Tinco Andringa (tinco-andringa) wrote:

Hi guys,

I'm stuck on this, so I've made a patch. I'm not sure how to test it; perhaps someone could give me some pointers? I tried out this bzr thing and it's super crazy. I can't figure out how to fork and submit a pull request. I browsed through bzr's commands and there was some Launchpad stuff, but none of it worked: either 500 errors or plain nothing happens. Then I found out I can't even view the patch after it was committed, but I could do "send" with -o to a file, so that's what I'm attaching.

Sorry for whining :) Hopefully we can work out whether this patch fixes the issue.

Ubuntu Foundations Team Bug Bot (crichton) wrote:

The attachment "patch" seems to be a patch. If it isn't, please remove the "patch" flag from the attachment, remove the "patch" tag, and if you are a member of the ~ubuntu-reviewers, unsubscribe the team.

[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issues please contact him.]

tags: added: patch
Tinco Andringa (tinco-andringa) wrote:

Is anyone watching this issue?

Rolf Leggewie (r0lf) wrote:

Lucid has reached the end of its life and is no longer receiving any updates. Marking the lucid task for this ticket as "Won't Fix".

Changed in vm-builder (Ubuntu Lucid):
status: Confirmed → Won't Fix
Tinco Andringa (tinco-andringa) wrote:

What does Ubuntu Lucid have to do with this bug? Isn't it present in every version of Ubuntu that ships with this software?

Tinco Andringa (tinco-andringa)
Changed in vm-builder (Ubuntu):
status: Confirmed → Fix Released