After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

Bug #1594239 reported by Kevin Zhao
This bug affects 1 person
Affects: QEMU | Status: Fix Released | Importance: Undecided | Assigned to: Unassigned

Bug Description

Description
===========
Using virt-manager to create a VM on AArch64, Ubuntu 16.04.
Add SCSI disks to the VM. After adding four or more SCSI disks, starting the VM produces a QEMU error.

Steps to reproduce
==================
1. Use virt-manager to create a VM.
2. After the VM has started, add SCSI disks to it. They will be allocated as "sdb, sdc, sdd, ...".
3. Once a disk name goes beyond sdg, virt-manager also assigns a new virtio-scsi controller for that disk, and the VM will be shut down.
4. Start the VM and observe the error log.

Expected result
===============
The VM starts smoothly and the added disks work.

Actual result
=============
Got the error:
starting domain: internal error: process exited while connecting to monitor: qemu-system-aarch64: /build/qemu-zxCwKP/qemu-2.5+dfsg/migration/savevm.c:620: vmstate_register_with_alias_id: Assertion `!se->compat || se->instance_id == 0' failed.
details=Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 90, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 126, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 83, in newfn
    ret = fn(self, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1402, in startup
    self._backend.create()
  File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 1035, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error: process exited while connecting to monitor: qemu-system-aarch64: /build/qemu-zxCwKP/qemu-2.5+dfsg/migration/savevm.c:620: vmstate_register_with_alias_id: Assertion `!se->compat || se->instance_id == 0' failed.

Environment
===========
1. virt-manager version is 1.3.2

2. Which hypervisor did you use?
    Libvirt+KVM
    $ kvm --version
    QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.1), Copyright (c) 2003-2008 Fabrice Bellard
    $ libvirtd --version
    libvirtd (libvirt) 1.3.1

3. Which storage type did you use?
   In the host file system; everything on one physical machine.
stack@u202154:/opt/stack/nova$ df -hl
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 61M 1.6G 4% /run
/dev/sda2 917G 41G 830G 5% /
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda1 511M 888K 511M 1% /boot/efi
cgmfs 100K 0 100K 0% /run/cgmanager/fs
tmpfs 1.6G 0 1.6G 0% /run/user/1002
tmpfs 1.6G 0 1.6G 0% /run/user/1000
tmpfs 1.6G 0 1.6G 0% /run/user/0

4. Environment information:
   Architecture : AARCH64
   OS: Ubuntu 16.04

The QEMU command line generated by libvirt is:
2016-06-20 02:39:46.561+0000: starting up libvirt version: 1.3.1, package: 1ubuntu10 (William Grant <email address hidden> Fri, 15 Apr 2016 12:08:21 +1000), qemu version: 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.1), hostname: u202154
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -name cent7 -S -machine virt,accel=kvm,usb=off -cpu host -drive file=/usr/share/edk2.git/aarch64/QEMU_EFI-pflash.raw,if=pflash,format=raw,unit=0,readonly=on -drive file=/var/lib/libvirt/qemu/nvram/cent7_VARS.fd,if=pflash,format=raw,unit=1 -m 2048 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid d5462bb6-159e-4dbd-9266-bf8c07fa1695 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-cent7/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1 -device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x1 -device virtio-scsi-device,id=scsi0 -device lsi,id=scsi1 -device lsi,id=scsi2 -device virtio-scsi-device,id=scsi3 -usb -drive file=/var/lib/libvirt/images/cent7-2.img,format=qcow2,if=none,id=drive-scsi0-0-0-0 -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive if=none,id=drive-scsi0-0-0-1,readonly=on -device scsi-cd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1 -drive file=/var/lib/libvirt/images/cent7-10.img,format=qcow2,if=none,id=drive-scsi0-0-0-2 -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi0-0-0-2,id=scsi0-0-0-2 -drive file=/var/lib/libvirt/images/cent7-11.img,format=qcow2,if=none,id=drive-scsi0-0-0-3 -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi0-0-0-3,id=scsi0-0-0-3 -drive file=/var/lib/libvirt/images/cent7-13.img,format=qcow2,if=none,id=drive-scsi3-0-0-0 -device scsi-hd,bus=scsi3.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi3-0-0-0,id=scsi3-0-0-0 -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=35 -device virtio-net-device,netdev=hostnet0,id=net0,mac=52:54:00:a1:6e:75 -serial pty -msg timestamp=on
Domain id=11 is tainted: host-cpu

The libvirt xml is:
<domain type='kvm'>
  <name>cent7</name>
  <uuid>d5462bb6-159e-4dbd-9266-bf8c07fa1695</uuid>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='aarch64' machine='virt'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/edk2.git/aarch64/QEMU_EFI-pflash.raw</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/cent7_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>
  <cpu mode='host-passthrough'/>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/cent7-2.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='sdb' bus='scsi'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/cent7-10.img'/>
      <target dev='sdc' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/cent7-11.img'/>
      <target dev='sdd' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='3'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/cent7-13.img'/>
      <target dev='sdv' bus='scsi'/>
      <address type='drive' controller='3' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='virtio-mmio'/>
    </controller>
    <controller type='scsi' index='1'>
      <address type='virtio-mmio'/>
    </controller>
    <controller type='scsi' index='2'>
      <address type='virtio-mmio'/>
    </controller>
    <controller type='scsi' index='3' model='virtio-scsi'>
      <address type='virtio-mmio'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:a1:6e:75'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='virtio-mmio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
  </devices>
</domain>

Revision history for this message
Peter Maydell (pmaydell) wrote :

This turns out to be a bug in libvirt, fixed in 1.3.4 or later; see the discussion here:
https://lists.gnu.org/archive/html/qemu-devel/2016-06/msg07217.html
I'm going to close this as 'not a bug', since it's not a bug in QEMU proper.

Changed in qemu:
status: New → Invalid
Revision history for this message
Kevin Zhao (kevin-zhao) wrote :

Hi All,
     The libvirt fix in 1.3.4 is about setting the machine arch for qemu-2.6; it does not fix this bug. I have also reproduced it with QEMU 2.6 and the newest version of libvirt. See the procedure below:
With the Qemu 2.6.50 and libvirt(commit 03ce1328086d6937d2647d616efff29941a3e80a):
The problem I met before occurs again; I can reproduce it.
     1. Launch a VM with Fedora 23 (for example); the XML is f23.xml in the attachment.
     2. Use the qemu-img command to generate the disks f23-2.qcow2 and f23-3.qcow2.
     3. Add f23-2.qcow2 as sdc:
     $ ./virsh attach-device f23 /root/sdc.xml
sdc.xml :
    <disk type="file" device="disk">
       <driver name="qemu" type="qcow2"/>
      <source file="/var/lib/libvirt/images/f23-2.qcow2"/>
      <target dev="sdc" bus="scsi"/>
    </disk>
     Then in the guest f23 we can see that it takes effect immediately.

     4. Add f23-3.qcow2 as sdh, and also add a virtio-scsi controller for sdh.
      $ ./virsh edit f23
      add the following:
      <controller type="scsi" index="1" model="virtio-scsi"/>
        <disk type="file" device="disk">
      <driver name="qemu" type="qcow2"/>
      <source file="/var/lib/libvirt/images/f23-4.qcow2"/>
      <target dev="sdh" bus="scsi"/>
    </disk>
      $ ./virsh destroy f23 && ./virsh start f23
Got the error:
2016-06-28 11:37:17.017+0000: 6329: warning : qemuDomainObjTaint:3227 : Domain id=15 name='f23' uuid=e2de65f4-5d9a-4b90-a56a-ae40f4763aec is tainted: high-privileges
2016-06-28 11:37:17.017+0000: 6329: warning : qemuDomainObjTaint:3227 : Domain id=15 name='f23' uuid=e2de65f4-5d9a-4b90-a56a-ae40f4763aec is tainted: host-cpu
2016-06-28 11:37:28.546+0000: 6313: error : qemuMonitorIORead:583 : Unable to read from monitor: Connection reset by peer
2016-06-28 11:37:28.546+0000: 6313: error : qemuProcessReportLogError:1815 : internal error: qemu unexpectedly closed the monitor: qemu-system-aarch64: /opt/stack/kevin/qemu/migration/savevm.c:615: vmstate_register_with_alias_id: Assertion `!se->compat || se->instance_id == 0' failed.

So this bug seems to exist with new QEMU and new libvirt.

Changed in qemu:
status: Invalid → Confirmed
Revision history for this message
Cole Robinson (crobinso) wrote :

This seems to be a minimal reproducer:

qemu-system-aarch64 \
  -machine virt-2.6,accel=tcg \
  -nodefaults \
  -no-user-config \
  -nographic -monitor stdio \
  -device virtio-scsi-device,id=scsi0 \
  -device virtio-scsi-device,id=scsi1 \
  -drive file=foo.img,format=raw,if=none,id=d0 \
  -device scsi-hd,bus=scsi0.0,drive=d0 \
  -drive file=foo.img,format=raw,if=none,id=d1 \
  -device scsi-hd,bus=scsi1.0,drive=d1

Revision history for this message
Dr. David Alan Gilbert (dgilbert-h) wrote :

I'm not 100% sure but I think the problem is we've got two devices with the same qdev path/name and we aren't expecting that when they're children of another device; here's some debug on the aarch64 version:

(qemu) vmstate_register_with_alias_id: vmsd=cpu_common
vmstate_register_with_alias_id: se->compat=(nil) instance_id=0
vmstate_register_with_alias_id: vmsd=cpu
vmstate_register_with_alias_id: se->compat=(nil) instance_id=0
vmstate_register_with_alias_id: vmsd=pflash_cfi01
vmstate_register_with_alias_id: se->compat=(nil) instance_id=0
vmstate_register_with_alias_id: vmsd=pflash_cfi01
vmstate_register_with_alias_id: se->compat=(nil) instance_id=1
vmstate_register_with_alias_id: vmsd=arm_gic
vmstate_register_with_alias_id: se->compat=(nil) instance_id=0
vmstate_register_with_alias_id: vmsd=pl011
vmstate_register_with_alias_id: se->compat=(nil) instance_id=0
vmstate_register_with_alias_id: vmsd=pl031
vmstate_register_with_alias_id: se->compat=(nil) instance_id=0
vmstate_register_with_alias_id: vmsd=gpex_root
vmstate_register_with_alias_id: dev/id case: 0000:00:00.0
vmstate_register_with_alias_id: se->compat=0x55d911656fa0 instance_id=0
vmstate_register_with_alias_id: vmsd=PCIBUS
vmstate_register_with_alias_id: se->compat=(nil) instance_id=0
vmstate_register_with_alias_id: vmsd=pl061
vmstate_register_with_alias_id: se->compat=(nil) instance_id=0
vmstate_register_with_alias_id: vmsd=gpio-key
vmstate_register_with_alias_id: se->compat=(nil) instance_id=0
vmstate_register_with_alias_id: vmsd=fw_cfg
vmstate_register_with_alias_id: se->compat=(nil) instance_id=0
vmstate_register_with_alias_id: vmsd=scsi-disk
vmstate_register_with_alias_id: dev/id case: 0:0:0
vmstate_register_with_alias_id: se->compat=0x55d912077020 instance_id=0
vmstate_register_with_alias_id: vmsd=scsi-disk
vmstate_register_with_alias_id: dev/id case: 0:0:0
vmstate_register_with_alias_id: se->compat=0x55d912078c90 instance_id=1
qemu-system-aarch64: /home/dgilbert/git/qemu/migration/savevm.c:624: vmstate_register_with_alias_id: Assertion `!se->compat || se->instance_id == 0' failed.

I think the problem is the names of the virtio-scsi devices - on pci I'd expect to have a PCI bus name for the name of the virtio interface, so that they're pci slot/scsi id; on x86 it won't let me instantiate a virtio-scsi-device - because I don't have a virtio bus, so I have to ask for a virtio-scsi-pci device and then I end up with unique names like:

vmstate_register_with_alias_id: dev/id case: 0000:00:01.3
vmstate_register_with_alias_id: se->compat=0x564cdb891a60 instance_id=0
vmstate_register_with_alias_id: vmsd=scsi-disk
vmstate_register_with_alias_id: dev/id case: 0000:00:02.0/0:0:0
vmstate_register_with_alias_id: se->compat=0x564cdb95ec30 instance_id=0
vmstate_register_with_alias_id: vmsd=scsi-disk
vmstate_register_with_alias_id: dev/id case: 0000:00:03.0/0:0:0

going back to aarch64 and doing an info qtree we have:

  dev: virtio-mmio, id ""
    gpio-out "sysbus-irq" 1
    mmio 000000000a000800/0000000000000200
    bus: virtio-mmio-bus.4
      type virtio-mmio-bus
  dev: virtio-mmio, id ""
    gpio-out "sysbus-irq" 1
    mmio 000000000a000600/0000000000000200
    bus: virtio-mmio-bus.3
      ...


Revision history for this message
Tom Hanson (thomas-hanson) wrote :

Dave,

How did you get the debug info in #4 above?

I can now replicate the error, but can't get to the monitor. I'm new to qemu so it's probably one of those things I haven't learned yet.

Thanks,
Tom

Revision history for this message
Tom Hanson (thomas-hanson) wrote :

Dave,

Yeah, well, never mind. :-)

If I'd looked at the code first, I'd have seen the function name, thought about which data I'd want to dump and where, and figured out where the debug data came from.

-Tom

Revision history for this message
Dr. David Alan Gilbert (dgilbert-h) wrote :

Hi Tom,
  Yeh it's just vmstate_register_with_alias_id printing vmsd->name at entry,
and then after the char *id = .... printing that as well (that's what I labelled as the dev/id case).
Then just before the assert I was printing the se->compat and se->instance_id values.

I noticed this bug because one of our test team had hit the same assert a few weeks back on x86, but it was on a truly bizarre setup (~50 nested PCIe bridges) so I knew where to look for it.

I think the idea is that if you have a se->compat string then it had better be unique (that is instance_id == 0); and the compat string is formed by concatenation of the qdev path and the name of this device. Then we have '0.0.0' as the name of this scsi device (i.e. local to this SCSI adapter) but no path that gives a unique string for the adapter like we do on the x86.

Dave

Revision history for this message
Tom Hanson (thomas-hanson) wrote : Re: [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

Thanks! That makes sense. But, off the cuff, it seems odd that there's an instance_id if it can only be zero. But then again, it may be overloaded or be applicable in other cases. I'll dig into the code today.


Revision history for this message
Dr. David Alan Gilbert (dgilbert-h) wrote :

Yeh I *think* the idea is that you either:
     a) have an instance_id
or
     b) have a unique name
         in which case you're also allowed to have an old compatibility name/instance_id to work with old code that didn't have a unique name (that's in se->compat)

so the assert is:
       assert(!se->compat || se->instance_id == 0);

 The !se->compat corresponds to (a)
     se->instance_id == 0 corresponds to (b)

Having a unique name is a very good idea for hotplug - it lets you unplug the middle one and still receive a migration correctly.

Dave

Revision history for this message
Tom Hanson (thomas-hanson) wrote :

We may be saying the same thing, but I'd word it differently. If a "device" has a "path" then it gets a se->compat (compatibility?) record.
   - Within that record each device gets an instance_id value based on its name. Multiple IDs for the same name are allowed.
   - At the "se" level each device also gets an instance id, but now based on path + name. There can only be one instance for that combination, which requires that the path be unique for each device name.

In this case both SCSI devices have the path "0:0:0" (chan:id:lun), which violates the above requirement.

Looking at the debug info I noticed that for "virtio-net" the (PCI) path is
not all zeroes (0000:00:01.0). Makes me wonder if maybe something on the
SCSI side of things should be generating valid paths.

Still digging.


Revision history for this message
Tom Hanson (thomas-hanson) wrote :
Download full text (4.8 KiB)

This looks like a command line / configuration issue which results in a name collision, as Dave predicted above.

I had to piece this together out of bits of information, since documentation is a bit sparse, but the following works. Note the explicit ID and LUN values on the -device declarations:
sudo qemu-system-aarch64 -enable-kvm -machine virt -cpu host -machine type=virt -nographic -smp 1 -m 2048 -kernel aarch64-linux-3.15rc2-buildroot.img --append "console=ttyAMA0" \
  -device virtio-scsi-device,id=scsi0 \
  -device virtio-scsi-device,id=scsi1 \
  -drive file=scsi_1.img,format=raw,if=none,id=d0 \
  -device scsi-hd,bus=scsi0.0,scsi-id=0,lun=0,drive=d0 \
  -drive file=scsi_2.img,format=raw,if=none,id=d1 \
  -device scsi-hd,bus=scsi1.0,scsi-id=0,lun=1,drive=d1

Added debug output shows the following (note the LUN value of 1 for the second drive):
calculate_compat_instance_id: For [scsi-disk], Init instance_id to [0]
calculate_new_instance_id: For [0:0:0/scsi-disk], Init instance_id to [0]
calculate_compat_instance_id: For [scsi-disk], Init instance_id to [0]
calculate_compat_instance_id: Found match for [scsi-disk], incrementing instance_id is now [1]
calculate_new_instance_id: For [0:0:1/scsi-disk], Init instance_id to [0]

Note: even though it's on a different bus, specifying the same id & lun will cause a collision.

If desired, the above can be simplified to use a single bus:
sudo qemu-system-aarch64 -enable-kvm -machine virt -cpu host -machine type=virt -nographic -smp 1 -m 2048 -kernel aarch64-linux-3.15rc2-buildroot.img --append "console=ttyAMA0" \
  -device virtio-scsi-device,id=scsi0 \
  -drive file=scsi_1.img,format=raw,if=none,id=d0 \
  -device scsi-hd,bus=scsi0.0,scsi-id=0,lun=0,drive=d0 \
  -drive file=scsi_2.img,format=raw,if=none,id=d1 \
  -device scsi-hd,bus=scsi0.0,scsi-id=0,lun=1,drive=d1

Searching the web, I saw this more commonly done with virtio-scsi-pci instead of virtio-scsi-device (but I can't tell you why):
sudo qemu-system-aarch64 -enable-kvm -machine virt -cpu host -machine type=virt -nographic -smp 1 -m 2048 -kernel aarch64-linux-3.15rc2-buildroot.img --append "console=ttyAMA0" \
  -device virtio-scsi-pci,id=scsi0 \
  -device virtio-scsi-pci,id=scsi1 \
  -drive file=scsi_1.img,format=raw,if=none,id=d0 \
  -device scsi-hd,bus=scsi0.0,scsi-id=0,lun=0,drive=d0 \
  -drive file=scsi_2.img,format=raw,if=none,id=d1 \
  -device scsi-hd,bus=scsi1.0,scsi-id=0,lun=1,drive=d1

Note that the name used internally now includes the bus id:
calculate_compat_instance_id: For [scsi-disk], Init instance_id to [0]
calculate_new_instance_id: For [0000:00:02.0/0:0:0/scsi-disk], Init instance_id to [0]
calculate_compat_instance_id: For [scsi-disk], Init instance_id to [0]
calculate_compat_instance_id: Found match for [scsi-disk], incrementing instance_id is now [1]
calculate_new_instance_id: For [0000:00:03.0/0:0:1/scsi-disk], Init instance_id to [0]

This means that it is now possible to use the same LUN for the drives on the 2 different buses:
sudo qemu-system-aarch64 -enable-kvm -machine virt -cpu host -machine type=virt -nographic -smp 1 -m 2048 -kernel aarch64-linux-3.15rc2-buildroot.img --append "console=ttyAMA0" \
  -device virtio-sc...


Revision history for this message
Laszlo Ersek (Red Hat) (lersek) wrote :

The issue is that virtio-mmio devices don't distinguish themselves on the sysbus level.

Using the small reproducer from Cole, and setting a breakpoint on virtio_bus_get_dev_path(), for the second virtio-scsi-device we get:

    ...
      qdev_get_dev_path()
        virtio_bus_get_dev_path()
          qdev_get_dev_path()

In virtio_bus_get_dev_path(), "bus" is set to "virtio-mmio-bus.30" ("name" field). "proxy" is set to the VirtIOMMIOProxy object that produces (owns) "virtio-mmio-bus.30" as its "bus" field. Finally virtio_bus_get_dev_path() forwards the "get dev path" request to this proxy object.

In that second qdev_get_dev_path() call, the parent bus for the VirtIOMMIOProxy object is "main-system-bus" (proxy->parent_bus->name).

The bus class for the main system bus is set up in system_bus_class_init(), and that function only sets the "print_dev" and "get_fw_dev_path" member functions. The "get_dev_path" member function is left NULL, hence qdev_get_dev_path() will return NULL ultimately.

Thus, in the "dev/id" case, in your debug patch's terminology, "id" will be NULL, and the inner concatenation branch will not be taken. Only vmsd->name will be copied into se->idstr.

This can be fixed in two ways, I believe.

* First, we could implement sysbus_get_dev_path(), and set it in system_bus_class_init(). This function would closely follow sysbus_get_fw_dev_path(): format the address of the first MMIO or IO region of the device to a string. For virtio-mmio devices in particular, this would be suitable, because they have a single (0x200 size) memory region (see VirtIOMMIOProxy.iomem), and the base address of that memory region is precisely what distinguishes the virtio-mmio transports from each other.

* Secondly, we could override virtio_bus_get_dev_path() in virtio_mmio_bus_class_init(), as the "get_dev_path" member function of the TYPE_VIRTIO_MMIO_BUS class. Implementing virtio_mmio_bus_get_dev_path() would result in the first qdev_get_dev_path() listed above to call our specialized code.

In virtio_mmio_bus_get_dev_path(), we could open-code the first two steps of virtio_bus_get_dev_path():

    BusState *bus = qdev_get_parent_bus(dev);
    DeviceState *proxy = DEVICE(bus->parent);

Here we'd know that "proxy" (a VirtIOMMIOProxy object) is actually derived from SysBusDevice, and we could dynamic-cast it accordingly. In the SysBusDevice, we could access mmio[0].addr directly (it's a "public" field, and we know virtio-mmio transports have exactly one MMIO region and no IO regions), and format that.

The small issue with both alternatives is that they'd immediately break migration between pre-patch and post-patch, because (IIUC) these paths get formatted into migration section headers or some such. So either change would require introducing new machine type versions for *all* target arches and machine types that are currently versioned. This is the reason I'm not trying to prototype either idea above -- just gathering all those machine types looks daunting.

Revision history for this message
Tom Hanson (thomas-hanson) wrote :

As noted above, virtio-scsi-pci uses a bus address as part of the internal ID string while virtio-scsi-device does not.

  * Is this difference intentional?

  * Are they intended to support different use cases? If so, what?

Revision history for this message
Dr. David Alan Gilbert (dgilbert-h) wrote :

> As noted above, virtio-scsi-pci uses a bus address as part of the internal ID string while
> virtio-scsi-device does not.
>
> * Is this difference intentional?
>
> * Are they intended to support different use cases? If so, what?

I don't think it is intentional; I think every SCSI device is expected to have
a globally unique name: it has a unique name on its adapter (in terms of the bus/id/lun triple, i.e. 0:0:0) and expects its adapter to have a unique name, which in the case of the PCI devices is the PCI bus name.

Dave

Revision history for this message
Peter Maydell (pmaydell) wrote : Re: [Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

On 5 July 2016 at 15:41, Tom Hanson <email address hidden> wrote:
> As noted above, virtio-scsi-pci uses a bus address as part of the
> internal ID string while virtio-scsi-device does not.
>
> * Is this difference intentional?

Probably not.

> * Are they intended to support different use cases? If so, what?

virtio-scsi-device is a "virtio backend" (other backends
include char, block and net devices), which plugs into a
"virtio bus". Virtio buses are provided by "virtio transports",
which can be PCI, s390 CCW, or virtio-mmio, and any particular
virtio bus has either 0 or 1 virtio backends on it.

The general idea is that this is a clean-ish separation between
the "what is this virtio device doing" and "how exactly do you
access it". virtio-scsi-pci is one of the family of virtio-*-pci
devices which combine a virtio-pci transport and a virtio-foo
backend in one convenient package -- we provide these (a) for
backward compatibility with old command lines etc from before
we made the backend/transport split and (b) for convenience
since the command lines are shorter than if you specify
the transport and backend separately manually.

(Note that a virtio-pci transport is a pci device, not a
pci bus -- it plugs into a pci bus on one and has a
virtio-bus on the other.)

thanks
-- PMM

Revision history for this message
Peter Maydell (pmaydell) wrote :

On 5 July 2016 at 16:02, Peter Maydell <email address hidden> wrote:
> (Note that a virtio-pci transport is a pci device, not a
> pci bus -- it plugs into a pci bus on one and has a
> virtio-bus on the other.)

Should read "on one end".

thanks
-- PMM

Revision history for this message
Tom Hanson (thomas-hanson) wrote :

So, in the original minimal command line above (#3), is the transport/bus missing? Or is mmio implied? Or?

Revision history for this message
Laszlo Ersek (Red Hat) (lersek) wrote :

I don't think this difference is intentional. I think we're seeing an interplay between the following two commits:

* http://git.qemu.org/?p=qemu.git;a=commitdiff;h=4d2ffa08b601b
* http://git.qemu.org/?p=qemu.git;a=commitdiff;h=7685ee6abcb93

Referring to the message of the second commit above, the problem is that sysbus doesn't implement get_dev_path, hence it doesn't support "creating unique savevm id strings".

virtio_bus_get_dev_path() simply defers to the parent bus ("main-system-bus" in this case).

Revision history for this message
Tom Hanson (thomas-hanson) wrote : Re: [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

What would a device path look like for an MMIO backend?

Revision history for this message
Laszlo Ersek (Red Hat) (lersek) wrote :

I'll try to come up with a patch, if for nothing more than illustration.

Revision history for this message
Peter Maydell (pmaydell) wrote : Re: [Qemu-devel] [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

On 5 July 2016 at 16:20, Tom Hanson <email address hidden> wrote:
> So, in the original minimal command line above (#3) is the transport/bus
> missing? Or is mmio implied? Or?

The virt board creates a collection of virtio-mmio transports,
so if you create just a backend on the command line (via
"-device virtio-scsi-device") it will be plugged into a
virtio-bus on a virtio-mmio transport.

You almost certainly didn't want to do this -- virtio-mmio
is only there for legacy reasons [it predates pci support
in the 'virt' board and the device-tree-driven kernel and
for a time it was the only way to do virtio].

You want to use virtio-pci, which is more flexible and has more
features (so either use -device virtio-scsi-pci, or use -device
virtio-scsi-device with -device virtio-pci and suitable id/bus
options to explicitly wire them together).

thanks
-- PMM
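A concrete sketch of that suggestion (the -device and -drive options are standard QEMU syntax, but the image path, IDs, and CPU choice below are placeholders, not taken from the bug report):

```shell
# Build the suggested invocation: a virtio-scsi-pci controller on the
# "virt" board, with one scsi-hd disk attached to it. disk.img is a
# placeholder image path.
args=(
  qemu-system-aarch64 -M virt -cpu cortex-a57 -nographic
  -device virtio-scsi-pci,id=scsi0
  -drive if=none,id=hd0,file=disk.img,format=raw
  -device scsi-hd,bus=scsi0.0,drive=hd0
)
printf '%s ' "${args[@]}"; echo
```

Naming the controller with id=scsi0 and pointing the disk at bus=scsi0.0 wires them together explicitly, so nothing falls back onto a virtio-mmio transport.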

Revision history for this message
Tom Hanson (thomas-hanson) wrote :

I haven't dug into the code for this particular aspect (yet) but it sounds like when a scsi-hd device is specified with a virtio backend but with no virtio bus specified, it is defaulting to an MMIO bus. Is this correct?

A few questions:

1) Is it valid for a SCSI drive to default to an MMIO bus/backend? Or should it have defaulted to PCI?

2) Given that the 2 scsi-hd devices were specified with no bus, no ID, and no LUN was there anything incorrect in how QEMU handled them? (Other than a more verbose error message being desirable.)

3) In the general case of a MMIO device, do they need to have a unique dev path? In the real world, there's no bus, no bus address, nothing that looks like a dev path. Just a memory address.

4) Or is it the case that MMIO devices need to be unique based solely on the device characteristics?

Revision history for this message
Tom Hanson (thomas-hanson) wrote :

On 07/05/2016 10:29 AM, Peter Maydell wrote:
...
> The virt board creates a collection of virtio-mmio transports,
> so if you create just a backend on the command line (via
> "-device virtio-scsi-device") it will be plugged into a
> virtio-bus on a virtio-mmio transport.
>
> You almost certainly didn't want to do this -- virtio-mmio
> is only there for legacy reasons [it predates pci support
> in the 'virt' board and the device-tree-driven kernel and
> for a time it was the only way to do virtio].

Should this behavior be changed? Use PCI as a default instead?

Revision history for this message
Laszlo Ersek (Red Hat) (lersek) wrote :
Revision history for this message
Tom Hanson (thomas-hanson) wrote :

I tested Laszlo's patch against this scenario and it eliminated the error.

However, I'm still not convinced that it's needed.

Let's start with a basic question: Does it make sense for there to be more than one MMIO "bus" on a system?

After all, it's NOT a physical bus and there's only one set of physical memory (at least on anything we're currently modelling).

Would it make as much sense to disallow a second virtio-mmio bus?

Revision history for this message
Laszlo Ersek (Red Hat) (lersek) wrote :

A virtio-mmio "bus" is a single-device transport. It has a fixed base address that is set at board creation time. The MMIO area is 0x200 bytes in size, and hosts the virtio registers for one device that can sit on this transport. Transports can be unused.

The "virt" machtype creates 32 transports (= 32 virtio-mmio "buses" suitable for one virtio device each). This allows for 32 virtio devices exposed via virtio-mmio. The placement of the different virtio-mmio "buses" at specific addresses in MMIO space is board specific.

So yes, it definitely makes sense to create several of these "buses". It's better to think of a single virtio-mmio "bus" as a virtio-mmio "transport" or "register block". The "bus" terminology is just an internal QEMU detail. (It is not enumerable in hardware, for example.)
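For illustration, the fixed placement described above can be sketched numerically. The 0x0a000000 base address below is an assumption about the "virt" board's memory map (hw/arm/virt.c) at the time; the 0x200 stride and the count of 32 are from the comment above:

```shell
# Print where each of the "virt" board's 32 virtio-mmio register blocks
# would sit, assuming base 0x0a000000 and a 0x200-byte stride. Each
# block hosts at most one virtio device, so each transport needs its
# own unique identifier.
base=$((0x0a000000))
stride=$((0x200))
for i in $(seq 0 31); do
  printf 'transport %2d: virtio_mmio@%08x\n' "$i" $((base + i * stride))
done
```

Because every transport has a distinct, fixed base address, that address is a natural ingredient for a unique savevm dev path.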

Revision history for this message
Peter Maydell (pmaydell) wrote :

On 5 July 2016 at 17:43, Tom Hanson <email address hidden> wrote:
> On 07/05/2016 10:29 AM, Peter Maydell wrote:
> ...
>> The virt board creates a collection of virtio-mmio transports,
>> so if you create just a backend on the command line (via
>> "-device virtio-scsi-device") it will be plugged into a
>> virtio-bus on a virtio-mmio transport.
>>
>> You almost certainly didn't want to do this -- virtio-mmio
>> is only there for legacy reasons [it predates pci support
>> in the 'virt' board and the device-tree-driven kernel and
>> for a time it was the only way to do virtio].
>
> Should this behavior be changed? Use PCI as a default instead?

This would break command line compatibility, which is
something we generally try to avoid. If you want
pci, why not just use virtio-scsi-pci ?

thanks
-- PMM

Revision history for this message
Tom Hanson (thomas-hanson) wrote : Re: [Bug 1594239] Re: After adding more scsi disks for Aarch64 virtual machine, start the VM and got Qemu Error

OK, that makes sense. I was thinking that the MMIO transport would/could
support multiple register blocks and thus multiple devices.

Revision history for this message
Tom Hanson (thomas-hanson) wrote :

So, given the one register block per virtio-mmio "bus", I agree that we need a "dev path" distinction between them.


Revision history for this message
Laszlo Ersek (Red Hat) (lersek) wrote :

Fixed in commit f58b39d2d5b6dea1a757e1dc7d67a44eac1c4f9c.

Changed in qemu:
status: Confirmed → Fix Committed
Revision history for this message
Thomas Huth (th-huth) wrote :

Fix has been included in QEMU v2.7.0 --> changing status to Fix Released.

Changed in qemu:
status: Fix Committed → Fix Released
Revision history for this message
PabloSaenz (pabzum) wrote :

I'm sorry to post as a newbie here. I just want to confirm the bug described above by Kevin Zhao; at least that's what it sounds like to me. I had a perfectly working VM on QEMU/KVM, managed with virt-manager, which I keep *only* to be able to access an Audigy PCI card from Windows XP. After I added a /dev/sdb disk as a device, the VM wouldn't boot and even froze my system. Then I disconnected the disk from the VM, and now, though it doesn't crash or freeze, I still can't get into the VM, and I get the error message below:

«Error starting domain: internal error: process exited while connecting to monitor: 2016-11-18T22:45:31.643085Z qemu-system-x86_64: -drive file=/home/[folder]/[folder]/VWinXP.raw,format=raw,if=none,id=drive-ide0-0-0: Could not open '/home/[folder]/[folder]/VWinXP.raw': Permission denied»

I have tried changing permissions for the .raw file, but (intriguingly, for a newbie like me) every time I try to start the VM, the permissions are automatically changed back to:

Owner = Libvirt Qemu
Group = kvm

I know this is not a help forum, but I would be grateful for some feedback. Been trying to fix this non-stop for the last two days.
