copied cpu flags don't match host cpu

Bug #1561019 reported by Felipe Reyes
Affects           Status        Importance  Assigned to  Milestone
libvirt           Fix Released  Undecided
libvirt (Ubuntu)  Won't Fix     Medium      Unassigned
Wily              Won't Fix     Undecided   Unassigned
Xenial            Won't Fix     Undecided   Unassigned

Bug Description

Using wily (libvirt 1.2.16-2ubuntu11.15.10.3) on an AMD FX-8350, nested KVM doesn't work unless the VM's CPU is configured with mode='host-passthrough' in its definition; on other CPUs the symptoms may differ.

Steps to reproduce:

L0: wily

$ uvt-kvm create --memory 2048 --cpu 2 vm-L1 release=wily arch=amd64 # the L1 guest can be trusty too, it doesn't really matter
$ uvt-kvm ssh --insecure vm-L1
(vm-L1)$ sudo apt-get install cpu-checker
(vm-L1)$ sudo kvm-ok
INFO: Your CPU does not support KVM extensions
KVM acceleration can NOT be used
(vm-L1)$ sudo modprobe kvm-amd
modprobe: ERROR: could not insert 'kvm_amd': Operation not supported
(vm-L1)$ dmesg | grep kvm | tail -n1
[45890.075592] kvm: no hardware support

Expected Result:

kvm-ok says that virtualization is supported

Workaround:

1) Shut down the VM (vm-L1)
2) On the KVM host, edit the vm-L1 definition (sudo virsh edit vm-L1) and change the cpu definition to:
  <cpu mode='host-passthrough'></cpu>
3) Power on the VM (a scripted equivalent of these steps is sketched below)
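
A scripted equivalent (a minimal sketch; assumes the vm-L1 domain from the steps above and only standard virsh commands):

  virsh shutdown vm-L1                 # 1) shut the guest down
  virsh dumpxml vm-L1 > vm-L1.xml
  # 2) replace the existing <cpu>...</cpu> element in vm-L1.xml with:
  #      <cpu mode='host-passthrough'/>
  virsh define vm-L1.xml
  virsh start vm-L1                    # 3) power it back on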

ProblemType: Bug
DistroRelease: Ubuntu 15.10
Package: libvirt-bin 1.2.16-2ubuntu11.15.10.3
ProcVersionSignature: Ubuntu 4.2.0-34.39-generic 4.2.8-ckt4
Uname: Linux 4.2.0-34-generic x86_64
NonfreeKernelModules: nvidia
ApportVersion: 2.19.1-0ubuntu5
Architecture: amd64
Date: Wed Mar 23 11:20:11 2016
InstallationDate: Installed on 2014-12-06 (472 days ago)
InstallationMedia: Ubuntu 14.10 "Utopic Unicorn" - Release amd64 (20141022.1)
SourcePackage: libvirt
UpgradeStatus: Upgraded to wily on 2016-03-08 (14 days ago)
modified.conffile..etc.apparmor.d.usr.lib.libvirt.virt.aa.helper: [modified]
modified.conffile..etc.libvirt.qemu.conf: [inaccessible: [Errno 13] Permission denied: '/etc/libvirt/qemu.conf']
modified.conffile..etc.libvirt.qemu.networks.default.xml: [inaccessible: [Errno 13] Permission denied: '/etc/libvirt/qemu/networks/default.xml']
mtime.conffile..etc.apparmor.d.usr.lib.libvirt.virt.aa.helper: 2016-03-14T13:22:07.751461

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

Description of problem:

This is using a SandyBridge CPU which has AVX instructions:
https://en.wikipedia.org/wiki/Advanced_Vector_Extensions

I'm booting a guest using <cpu mode="host-model"/>. Inside
the guest, when initializing an mdadm device (yes, this guest
has RAID arrays inside), we see the trace attached below.

I think what is happening here:

 (a) CPU flags are copied from host to guest, advertising 'avx'
 (b) Guest tries to use 'avx'.
 (c) KVM doesn't emulate it, so it all falls in a hole.

Perhaps libvirt should filter flags based on what KVM can actually do?
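
A quick way to see the mismatch described above (an illustrative sketch, not part of the original report; run the same command on the host and inside the guest and compare):

  grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > host-flags    # on the host
  grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > guest-flags   # inside the guest
  comm -23 host-flags guest-flags    # flags the host advertises but the guest lacks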

Version-Release number of selected component (if applicable):

qemu-1.2.0-16.fc18.x86_64
libvirt-0.10.2-3.fc18.x86_64
kernel-3.6.2-2.fc18.x86_64

How reproducible:

100%

Steps to Reproduce:
1. in libguestfs test suite: make -C tests/md check

Additional info:

Host CPU flags:
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid

mdadm --create --run r5t1 --level 5 --raid-devices 4 --spare-devices 1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 missing
mdadm: Defaulting to version 1.2 metadata
[ 5.131487] md: bind<sda2>
[ 5.132218] md: bind<sdb2>
[ 5.132966] md: bind<sdc2>
[ 5.133773] md: bind<sdd2>
[ 5.150258] async_tx: api initialized (async)
[ 5.152459] xor: automatically using best checksumming function:
[ 5.153064] invalid opcode: 0000 [#1] SMP
[ 5.153423] Modules linked in: xor(+) async_tx raid1 ghash_clmulni_intel microcode virtio_net virtio_scsi virtio_blk virtio_rng virtio_balloon virtio_mmio sparse_keymap rfkill sym53c8xx scsi_transport_spi crc8 crc_ccitt crc_itu_t crc32c_intel libcrc32c
[ 5.154012] CPU 0
[ 5.154012] Pid: 262, comm: modprobe Not tainted 3.6.2-2.fc18.x86_64.debug #1 Bochs Bochs
[ 5.154012] RIP: 0010:[<ffffffffa0095c6c>] [<ffffffffa0095c6c>] xor_avx_2+0x5c/0x270 [xor]
[ 5.154012] RSP: 0018:ffff88001abfdd00 EFLAGS: 00010202
[ 5.154012] RAX: 000000008005003b RBX: ffff8800192d0000 RCX: 0000000000000001
[ 5.154012] RDX: ffff8800192d3000 RSI: ffff8800192d0000 RDI: 0000000000001000
[ 5.154012] RBP: ffff88001abfddc8 R08: 0000000000000000 R09: 0000000000000000
[ 5.154012] R10: 0000000000000001 R11: 0000000000000000 R12: ffff8800192d3000
[ 5.154012] R13: 0000000000000008 R14: 000000008005003b R15: ffff8800192d0000
[ 5.154012] FS: 00007f1e5e769740(0000) GS:ffff88001f000000(0000) knlGS:0000000000000000
[ 5.154012] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 5.154012] CR2: 00007fff1f8d5000 CR3: 0000000019278000 CR4: 00000000000007f0
[ 5.154012] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 5.154012] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 5.154012] Process modprobe (pid: 262, threadinfo ffff88001abfc000, task ffff88001a0fa450)
[ 5.154012] Stack:
[ ...

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

Instruction that KVM failed to parse was:
     bfc: c5 fc 29 04 24 vmovaps %ymm0,(%rsp)

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

Apparently you can't just change the libvirt XML to disable
features that don't work:

  <cpu mode="host-model">
    <model fallback="allow"/>
    <feature policy="disable" name="avx"/>
  </cpu>

gives the error:

*stdin*:6: libguestfs: error: could not create appliance through libvirt: internal error Non-empty feature list specified without CPU model [code=1 domain=31]

Revision history for this message
In , Jiri (jiri-redhat-bugs) wrote :

(In reply to comment #2)
> Apparently you can't just change the libvirt XML to disable
> features that don't work:
>
> <cpu mode="host-model">
> <model fallback="allow"/>
> <feature policy="disable" name="avx"/>
> </cpu>

Right, bug 799354 is tracking that.

Revision history for this message
In , Jiri (jiri-redhat-bugs) wrote :

Could you share the QEMU command line generated by libvirt? I believe it does not explicitly mention avx, i.e., it gets there through the SandyBridge model, right? Anyway, avx is supposed to work with KVM since QEMU supports the SandyBridge model, which enables avx. Thus, it's either a QEMU or a kernel bug. I'm moving this bug to the former for further investigation.

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

LC_ALL=C LD_LIBRARY_PATH=/tmp/whenjobs2f2d92b86ba2111addc7e199fa77e648/libguestfs-1.19.53/src/.libs:/tmp/whenjobs2f2d92b86ba2111addc7e199fa77e648/libguestfs-1.19.53/gobject/.libs:/tmp/whenjobs2f2d92b86ba2111addc7e199fa77e648/libguestfs-1.19.53/ruby/ext/guestfs PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/rjones/.local/bin:/home/rjones/bin HOME=/home/rjones USER=rjones LOGNAME=rjones TMPDIR=/home/rjones/d/libguestfs/tmp /usr/bin/qemu-kvm -name guestfs-1t6f28e33d5tqbmu -S -M pc-1.2 -cpu Westmere,+rdtscp,+avx,+osxsave,+xsave,+tsc-deadline,+pdcm,+xtpr,+tm2,+est,+vmx,+ds_cpl,+monitor,+dtes64,+pclmuldq,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -enable-kvm -m 500 -smp 1,sockets=1,cores=1,threads=1 -uuid 3ab7b5c6-31ff-2591-bc35-4d338347423c -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/home/rjones/.config/libvirt/qemu/lib/guestfs-1t6f28e33d5tqbmu.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-reboot -no-shutdown -no-acpi -kernel /home/rjones/d/libguestfs/tmp/.guestfs-1000/kernel.17966 -initrd /home/rjones/d/libguestfs/tmp/.guestfs-1000/initrd.17966 -append panic=1 console=ttyS0 udevtimeout=600 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sde selinux=0 guestfs_verbose=1 TERM=xterm -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/home/rjones/d/libguestfs/tests/md/md-test1.img,if=none,id=drive-scsi0-0-0-0,format=raw,cache=none -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive file=/home/rjones/d/libguestfs/tests/md/md-test2.img,if=none,id=drive-scsi0-0-1-0,format=raw,cache=none -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0 -drive file=/home/rjones/d/libguestfs/tests/md/md-test3.img,if=none,id=drive-scsi0-0-2-0,format=raw,cache=none -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=2,lun=0,drive=drive-scsi0-0-2-0,id=scsi0-0-2-0 -drive file=/home/rjones/d/libguestfs/tests/md/md-test4.img,if=none,id=drive-scsi0-0-3-0,format=raw,cache=none -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=3,lun=0,drive=drive-scsi0-0-3-0,id=scsi0-0-3-0 -drive file=/home/rjones/d/libguestfs/tmp/libguestfsowvgCb/snapshot1,if=none,id=drive-scsi0-0-4-0,format=qcow2,cache=unsafe -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=4,lun=0,drive=drive-scsi0-0-4-0,id=scsi0-0-4-0 -chardev socket,id=charserial0,path=/home/rjones/d/libguestfs/tmp/libguestfsowvgCb/console.sock -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/home/rjones/d/libguestfs/tmp/libguestfsowvgCb/guestfsd.sock -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

Revision history for this message
In , Cole (cole-redhat-bugs) wrote :

Eduardo, can you comment here?

Revision history for this message
In , Jiri (jiri-redhat-bugs) wrote :

Interesting, your SandyBridge machine is likely missing the x2apic feature. If it had that feature, libvirt would report SandyBridge instead of Westmere + avx. Could you try changing the XML to contain

  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>SandyBridge</model>
    <feature policy='disable' name='x2apic'/>
  </cpu>

and see if that makes any difference?

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

Yes, this works:

  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>SandyBridge</model>
    <feature policy='disable' name='x2apic'/>
  </cpu>

Host /proc/cpuinfo is below. It is indeed missing x2apic.

processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
microcode : 0x28
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips : 6822.32
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
microcode : 0x28
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 1
cpu cores : 4
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips : 6822.32
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
microcode : 0x28
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 2
cpu cores : 4
apicid : 4
initial apicid : 4
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips : 6822.32
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
microcode : 0x28
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 3
cpu cores : 4
apicid : 6
init...


Revision history for this message
In , Jiri (jiri-redhat-bugs) wrote :

Thanks, what about:

  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>SandyBridge</model>
    <feature policy='disable' name='x2apic'/>
    <feature policy='require' name='osxsave'/>
    <feature policy='require' name='pdcm'/>
    <feature policy='require' name='xtpr'/>
    <feature policy='require' name='tm2'/>
    <feature policy='require' name='est'/>
    <feature policy='require' name='vmx'/>
    <feature policy='require' name='ds_cpl'/>
    <feature policy='require' name='monitor'/>
    <feature policy='require' name='dtes64'/>
    <feature policy='require' name='pbe'/>
    <feature policy='require' name='tm'/>
    <feature policy='require' name='ht'/>
    <feature policy='require' name='ss'/>
    <feature policy='require' name='acpi'/>
    <feature policy='require' name='ds'/>
    <feature policy='require' name='vme'/>
  </cpu>

That should give you the same (feature-wise) CPU, but using the SandyBridge rather
than the Westmere model.

And BTW, it looks like we have a bug since

  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>SandyBridge</model>
    <feature policy='force' name='x2apic'/>
  </cpu>

should work even if the host CPU does not support x2apic (and AFAIK x2apic is
one of the features that QEMU will emulate) but I tried that and libvirt is
complaining that x2apic is not supported by host CPU.

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

This works:

  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>SandyBridge</model>
    <feature policy='disable' name='x2apic'/>
    <feature policy='require' name='osxsave'/>
    <feature policy='require' name='pdcm'/>
    <feature policy='require' name='xtpr'/>
    <feature policy='require' name='tm2'/>
    <feature policy='require' name='est'/>
    <feature policy='require' name='vmx'/>
    <feature policy='require' name='ds_cpl'/>
    <feature policy='require' name='monitor'/>
    <feature policy='require' name='dtes64'/>
    <feature policy='require' name='pbe'/>
    <feature policy='require' name='tm'/>
    <feature policy='require' name='ht'/>
    <feature policy='require' name='ss'/>
    <feature policy='require' name='acpi'/>
    <feature policy='require' name='ds'/>
    <feature policy='require' name='vme'/>
  </cpu>

I also tried above plus:
    <feature policy='require' name='avx'/>
which *worked*. Is that expected?

Revision history for this message
In , Jiri (jiri-redhat-bugs) wrote :

The <feature policy='require' name='avx'/> element is redundant since avx is already required by the SandyBridge model; the guest OS should see exactly the same CPU regardless of this element.

Anyway, it's expected that SandyBridge works with avx since it explicitly has support for it. The fact that it doesn't work when it's added on top of Westmere is unfortunate but not entirely surprising. It's likely influenced by bits that are not covered by libvirt, such as cpu family, model, stepping and other stuff. We've seen this behaviour in the past.

I think we need a new mode in addition to custom, host-model, and host-passthrough, that would be similar to host-model but will only use bare CPU model without trying to add all features that are not included in the model but supported by host CPU.

The situation may also become a bit better once we have a better interface for CPU probing (bug 824989).

Eduardo, could you confirm that the kernel panic might be caused by libvirt using Westmere + avx, and that this is an unsupported configuration? If so, we can move this bug to libvirt.

Revision history for this message
In , Eduardo (eduardo-redhat-bugs) wrote :

"-cpu Westmere,+avx" actually should enable the bit on CPUID if and only if KVM is able to handle the feature. When KVM can't handle the feature, it should be filtered out before the guest CPUID table is built. I still don't understand why exactly the guest got an invalid operation exception, as the instruction was supposed to be working. Maybe it's related to the "level" field and the xsave feature (that is required for AVX, as far as I recall), that needs level >= 0xD.

I don't know if the guest is really allowed to use the feature when the AVX bit is set but the necessary xsave bits are not present (if it is not, then this is a guest bug). If the guest was simply misled by the CPUID information, and correct in trying to use the instructions, it is a QEMU bug (QEMU should have disabled the feature, and abort in case the "enforce" flag is set). In either case, it is not a libvirt bug to ask for "-cpu Westmere,+avx".

But it would be interesting if libvirt could treat some CPU features as "can be safely disabled". It would be much better if libvirt used "-cpu SandyBridge,-x2apic" on that host, instead of "-cpu Westmere,+<lots of flags>".
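
For anyone hitting this, the "enforce" behaviour mentioned above can be requested explicitly on the QEMU command line (a sketch; the feature list is abbreviated and only illustrative):

  # 'check' warns about each requested feature KVM cannot provide;
  # 'enforce' makes QEMU refuse to start instead of silently dropping flags
  qemu-system-x86_64 -enable-kvm -cpu Westmere,+avx,enforce ...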

Revision history for this message
In , Cole (cole-redhat-bugs) wrote :

Reassigning to libvirt based on above discussion

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

Do we have an update on this? I would really like to
start using host-model.

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

This still happens with Fedora 19, libvirt-1.0.5.5-1.fc19.x86_64.

Loading the btrfs module, which loads the xor module, fails
because it tries to run an AVX instruction:

modprobe btrfs
[ 1.804591] xor: automatically using best checksumming function:
[ 1.806020] invalid opcode: 0000 [#1] SMP
[ 1.806416] Modules linked in: xor(+) snd_pcsp snd_pcm snd_page_alloc ghash_clmulni_intel snd_timer microcode snd soundcore virtio_net virtio_scsi virtio_blk virtio_rng virtio_balloon virtio_mmio sparse_keymap rfkill sym53c8xx scsi_transport_spi crc8 crc_ccitt crc32 crc_itu_t crc32_pclmul crc32c_intel libcrc32c megaraid megaraid_sas megaraid_mbox megaraid_mm
[ 1.809709] CPU: 0 PID: 150 Comm: modprobe Not tainted 3.10.9-200.fc19.x86_64.debug #1
[ 1.810397] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 1.810931] task: ffff880019cf8000 ti: ffff880019ca4000 task.ti: ffff880019ca4000
[ 1.811597] RIP: 0010:[<ffffffffa0119da0>] [<ffffffffa0119da0>] xor_avx_2+0x50/0x230 [xor]
[ 1.812333] RSP: 0018:ffff880019ca5d08 EFLAGS: 00010202
[ 1.812811] RAX: 0000000000000007 RBX: ffff880019eb8000 RCX: ffff880019cf8000
[ 1.813429] RDX: ffff880019ca5fd8 RSI: 0000000000000000 RDI: 0000000000001000
[ 1.814071] RBP: ffff880019ca5d20 R08: 0000000000000002 R09: 0000000000000000
[ 1.814684] R10: 0000000000000001 R11: 0000000000000001 R12: ffff880019ebb000
[ 1.815320] R13: 0000000000000008 R14: ffff880019eb8000 R15: 00000000fffb732e
[ 1.815952] FS: 00007fb93b96a740(0000) GS:ffff88001f000000(0000) knlGS:0000000000000000
[ 1.816647] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1.817170] CR2: 00007f8d740e9000 CR3: 0000000019e2b000 CR4: 00000000000007f0
[ 1.817785] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1.818424] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 1.819047] Stack:
[ 1.819245] 0000000000000000 ffffffffa011c000 ffff880019ebb000 ffff880019ca5d60
[ 1.819929] ffffffffa00a7080 0000000000000005 ffff880019eb8000 ffff880019ebb000
[ 1.820651] ffffffffa011c110 0000000000000001 ffffffffa011c0c0 ffff880019ca5d80
[ 1.821354] Call Trace:
[ 1.821581] [<ffffffffa00a7080>] do_xor_speed+0x80/0xe0 [xor]
[ 1.822097] [<ffffffffa00a714b>] calibrate_xor_blocks+0x6b/0xf20 [xor]
[ 1.822683] [<ffffffffa00a70e0>] ? do_xor_speed+0xe0/0xe0 [xor]
[ 1.823214] [<ffffffff810020e2>] do_one_initcall+0xe2/0x1a0
[ 1.823723] [<ffffffff810e95e2>] load_module+0x1c62/0x27d0
[ 1.824217] [<ffffffff813749d0>] ? ddebug_proc_write+0xf0/0xf0
[ 1.824748] [<ffffffff810ea2e6>] SyS_finit_module+0x86/0xb0
[ 1.825247] [<ffffffff81720099>] system_call_fastpath+0x16/0x1b
[ 1.825779] Code: 01 00 00 65 48 8b 04 25 f0 c8 00 00 83 80 44 e0 ff ff 01 e8 23 7a f0 e0 4d 85 ed 49 8d 45 ff 0f 84 9b 01 00 00 66 0f 1f 44 00 00 <c4> c1 7d 6f 04 24 c5 fc 57 03 c5 fd 7f 03 c4 c1 7d 6f 4c 24 20
[ 1.828335] RIP [<ffffffffa0119da0>] xor_avx_2+0x50/0x230 [xor]
[ 1.828869] RSP <ffff880019ca5d08>
[ 1.829213] ---[ end trace 70ce68c981f09edb ]---

However using plain old -cpu host on the qemu command line works fine.

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

Since this bug has been around for almost *a year*, and it's extremely
annoying, I'm trying to work out if this is a bug in the guest kernel,
qemu, or libvirt. I'm not any closer to working that out.

libvirt passes the following CPU/machine-related flags:

  -machine pc-i440fx-1.6,accel=kvm,usb=off
  -cpu Westmere,+rdtscp,+avx,+osxsave,+xsave,+tsc-deadline,+pcid,+pdcm,+xtpr,+tm2,+est,+vmx,+ds_cpl,+monitor,+dtes64,+pclmuldq,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme
  -m 500
  -realtime mlock=off
  -smp 1,sockets=1,cores=1,threads=1

Host CPU flags are reported to be:

flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid
cpuid level : 13

Guest CPU flags are reported to be:

flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc rep_good nopl pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes avx hypervisor lahf_lm
cpuid level : 13

(In reply to Eduardo Habkost from comment #12)
> "-cpu Westmere,+avx" actually should enable the bit on CPUID if and only if
> KVM is able to handle the feature. When KVM can't handle the feature, it
> should be filtered out before the guest CPUID table is built. I still don't
> understand why exactly the guest got an invalid operation exception, as the
> instruction was supposed to be working. Maybe it's related to the "level"
> field and the xsave feature (that is required for AVX, as far as I recall),
> that needs level >= 0xD.

I see the xsave flag in the host CPU flags, and in the libvirt-generated qemu
command line. I do NOT see the xsave flag in the guest flags. Not sure what
that means.

Assuming "level" means "cpuid level", then both report 13 == 0xD.

> I don't know if the guest is really allowed to use the feature when the AVX
> bit is set but the necessary xsave bits are not present (in it is not, then
> this is a guest bug).

As far as I can tell from the kernel code, cpu_has_avx just checks the
avx feature flag. It doesn't check for xsave. The xor code which is
throwing the invalid opcode is only checking cpu_has_avx, ie. only checking
for the avx flag.

According to the Intel PRM it does appear that you shouldn't use avx unless
xsave is supported, although it doesn't appear to be an absolute requirement.
I'm assuming it's something to do with those extra registers not being
saved over a context switch, which doesn't sound like an invalid opcode
situation to me (corrupt data OTOH).

Why would xsave bit not be present in the guest?

> If the guest was simply misled by the CPUID
> information, and correct in trying to use the instructions, it is a QEMU bug
> (QEMU should have disabled the feature, and abort in case the "enforce" flag
> is set). On either case, it is not a libvirt bug to ask for "-cpu
> West...


Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

Also to confirm, the instruction which fails is an AVX instruction
(not xsave):

    1c60: c4 c1 7d 6f 04 24 vmovdqa (%r12),%ymm0

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

Still happening in Rawhide (albeit using the F19 kernel, because
the Rawhide kernel has other issues):

libvirt-1.1.2-1.fc21.x86_64
qemu-1.6.0-5.fc21.x86_64
kernel-3.10.10-200.fc19.x86_64

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

The following program compiled and ran fine on the host, so I
guess that indicates that the host has no problem with AVX
instructions:

        .text
        .globl main
main:
        movq $testdata,%r12
        vmovdqa (%r12),%ymm0
        /*movq (%r12),%r10*/
        movq $0,%rax
        ret

        .data
        .align 32
testdata:
        .float 1,2,3,4,5,6,7,8

Revision history for this message
In , Zeeshan (zeeshan-redhat-bugs) wrote :

From Boxes bug https://bugzilla.gnome.org/show_bug.cgi?id=720798

"When I run qemu with the '-cpu host' parameter instead, GNOME Shell starts
correctly."

Shouldn't libvirt just use that option of qemu for 'host-model' config?

Revision history for this message
In , Zeeshan (zeeshan-redhat-bugs) wrote :

Oops, didn't mean to remove the needinfo flag.

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

(In reply to Zeeshan Ali from comment #20)
> From Boxes bug https://bugzilla.gnome.org/show_bug.cgi?id=720798
>
> "When I run qemu with the '-cpu host' parameter instead, GNOME Shell starts
> correctly."
>
> Shouldn't libvirt just use that option of qemu for 'host-model' config?

host-passthrough maps to the qemu '-cpu host' parameter.

host-model is as convoluted as it is so that live migration can
be supported. ie. You start a guest on host A with host-model,
then migrate it to host B. What you DON'T want to happen is
that -cpu host is used on host B, since likely host A and host B
have different CPUs, so the guest will see a sudden change in CPU
capabilities and probably crash / get inconsistent results.
Instead host-model tries to create a full description of host A's
CPU, and after live migration the guest sees the same (host A)
CPU features.

AIUI host-passthrough disables or prevents live migration in some way.

Revision history for this message
In , Zeeshan (zeeshan-redhat-bugs) wrote :

(In reply to Richard W.M. Jones from comment #22)
> (In reply to Zeeshan Ali from comment #20)
> > From Boxes bug https://bugzilla.gnome.org/show_bug.cgi?id=720798
> >
> > "When I run qemu with the '-cpu host' parameter instead, GNOME Shell starts
> > correctly."
> >
> > Shouldn't libvirt just use that option of qemu for 'host-model' config?
>
> host-passthrough maps to the qemu '-cpu host' parameter.
>
> host-model is as convoluted as it is so that live migration can
> be supported. ie. You start a guest on host A with host-model,
> then migrate it to host B. What you DON'T want to happen is
> that -cpu host is used on host B, since likely host A and host B
> have different CPUs, so the guest will see a sudden change in CPU
> capabilities and probably crash / get inconsistent results.
> Instead host-model tries to create a full description of host A's
> CPU, and after live migration the guest sees the same (host A)
> CPU features.
>
> AIUI host-passthrough disables or prevents live migration in some way.

Ah, in that case I think we are actually better off using host-model in Boxes, at least for now. We don't have live migration support, and if the hardware changes, the user will lose any saved state on the VMs.

Revision history for this message
In , Cole (cole-redhat-bugs) wrote :

Moving to the upstream tracker: there's ongoing work in libvirt and qemu to sort this out, but it's unlikely to ever be backportable to a stable Fedora; it will require a rebase.

Revision history for this message
In , Cole (cole-redhat-bugs) wrote :

*** Bug 1084576 has been marked as a duplicate of this bug. ***

Revision history for this message
In , Zeeshan (zeeshan-redhat-bugs) wrote :

Any update on this? We moved from 'host-model' to 'host-passthrough' to avoid this in Boxes and only now realized that that breaks support of non-KVM CPUs. :(

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

(In reply to Zeeshan Ali from comment #26)
> Any update on this? We moved from 'host-model' to 'host-passthrough' to
> avoid this in Boxes and only now realized that that breaks support of
> non-KVM CPUs. :(

Still a bug. It only happens on certain hardware (which
unfortunately I own) so it's rather hard to reproduce.

You should use host-passthrough, but disable it when the
<domain type="qemu">. This is the logic used by libguestfs:

https://github.com/libguestfs/libguestfs/blob/master/src/launch-libvirt.c#L1012

I would say your biggest problem with using host-passthrough
is surely that live migration doesn't work? (Of course we
don't care about that in libguestfs)

Revision history for this message
In , Zeeshan (zeeshan-redhat-bugs) wrote :

(In reply to Richard W.M. Jones from comment #27)
> (In reply to Zeeshan Ali from comment #26)
> > Any update on this? We moved from 'host-model' to 'host-passthrough' to
> > avoid this in Boxes and only now realized that that breaks support of
> > non-KVM CPUs. :(
>
> Still a bug. It only happens on certain hardware (which
> unfortunately I own) so it's rather hard to reproduce.
>
> You should use host-passthrough, but disable it when the
> <domain type="qemu">. This is the logic used by libguestfs:
>
> https://github.com/libguestfs/libguestfs/blob/master/src/launch-libvirt.
> c#L1012

Hmm.. Ah thanks. So by 'disable' you mean you just leave the model to libvirt? I was thinking of following the advice in libvirt docs:

"Beware, due to the way libvirt detects host CPU and due to the fact libvirt does not talk to QEMU/KVM when creating the CPU model, CPU configuration created using host-model may not work as expected. The guest CPU may differ from the configuration and it may also confuse guest OS by using a combination of CPU features and other parameters (such as CPUID level) that don't work. Until these issues are fixed, it's a good idea to avoid using host-model and use custom mode with just the CPU model from host capabilities XML."

> I would say your biggest problem with using host-passthrough
> is surely that live migration doesn't work? (Of course we
> don't care about that in libguestfs)

It certainly would be nice to have 'live migration' since Boxes always suspends VMs on exit, and if you change your CPU in between, your only choice is to lose the saved state (which could easily mean loss of important data), but that's not currently my issue. Boxes being unable to start the created VM on non-KVM hosts, is. :)

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

By "disable", I mean, don't include the <cpu> element when !kvm.
And also for ARM which doesn't have a concept of -cpu host. See
the code I linked to.
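
A rough pseudo-shell paraphrase of that selection (not the actual libguestfs code - see the linked launch-libvirt.c for the real logic; the variable names here are made up):

  if [ "$domain_type" = "kvm" ] && [ "$guest_arch" = "x86_64" ]; then
      cpu_element="<cpu mode='host-passthrough'/>"
  else
      cpu_element=""   # omit the <cpu> element entirely for !kvm (TCG) and for ARM
  fi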

Revision history for this message
In , Paolo (paolo-redhat-bugs) wrote :

The bug is that features xsave and avx need level=13, but Westmere has 11.

QEMU should not expose xsave unless level=13. Does it work if you remove "+xsave" from the command line of comment 16?
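
For reference, that would be the -cpu string from comment 16 with only +xsave dropped (illustrative):

  -cpu Westmere,+rdtscp,+avx,+osxsave,+tsc-deadline,+pcid,+pdcm,+xtpr,+tm2,+est,+vmx,+ds_cpl,+monitor,+dtes64,+pclmuldq,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme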

Revision history for this message
In , Richard (richard-redhat-bugs) wrote :

I cannot reproduce this with 3.16.0-0.rc3.git2.1.fc21.x86_64

I suspect the kernel has been modified to work around buggy
CPUID.

Previously:

[ 5.150258] async_tx: api initialized (async)
[ 5.152459] xor: automatically using best checksumming function:
[ 5.153064] invalid opcode: 0000 [#1] SMP

Now:

[ 2.198939] async_tx: api initialized (async)
[ 2.201931] xor: measuring software checksum speed
[ 2.212021] prefetch64-sse: 16340.000 MB/sec
[ 2.222012] generic_sse: 15188.000 MB/sec
[ 2.222856] xor: using function: prefetch64-sse (16340.000 MB/sec)
[ 2.239949] md: raid6 personality registered for level 6
[ 2.241063] md: raid5 personality registered for level 5
[ 2.242168] md: raid4 personality registered for level 4
[ 2.244093] md/raid:md127: device sda operational as raid disk 0
[ 2.247075] md/raid:md127: allocated 0kB
[ 2.247939] md/raid:md127: raid level 5 active with 1 out of 2 devices, algorithm 2
[ 2.249726] md127: detected capacity change from 0 to 103809024
[ 2.253250] md: recovery of RAID array md127
[ 2.255054] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[ 2.256554] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[ 2.258584] md: using 128k window, over a total of 101376k.
mdadm: array /dev/md/md started.

Revision history for this message
In , Cole (cole-redhat-bugs) wrote :

*** Bug 1271157 has been marked as a duplicate of this bug. ***

Revision history for this message
In , jean-christophe (jean-christophe-redhat-bugs-1) wrote :

I experience a similar issue with libvirt 1.2.20 on Ubuntu 15.04 (3.19.0-30-generic #34-Ubuntu x86_64).

If I use "Copy host CPU configuration", virt-manager ends up using a Westmere profile instead of Haswell (i7-4700MQ) - more details here: https://bugzilla.redhat.com/show_bug.cgi?id=1271157

If I use "Haswell" profile, I get the following error:
Error starting domain: unsupported configuration: guest and host CPU are not compatible: Host CPU does not provide required features: rtm, hle, x2apic; try using 'Haswell-noTSX' CPU model

If I use "Haswell-noTSX" profile, I get the following error:
Error starting domain: unsupported configuration: guest and host CPU are not compatible: Host CPU does not provide required features: x2apic

Revision history for this message
In , Cole (cole-redhat-bugs) wrote :

*** Bug 1281971 has been marked as a duplicate of this bug. ***

Revision history for this message
Felipe Reyes (freyes) wrote :
Felipe Reyes (freyes)
description: updated
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Hi,
if no cpu model is set the default is qemu64 which is rather conservative for
- maximum migratability
- guaranteeing the same feature set will be available wherever you start your KVM

But I thought svm/vmx would be part of that set.

A short test, at least on Xenial, confirmed that assumption; for me the guest reports
flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16 x2apic popcnt hypervisor lahf_lm abm tpr_shadow vnmi flexpriority ept vpid
which contains vmx, and kvm-ok is happy.

Could you provide a "cat /proc/cpuinfo" of host and guest as well as the full commandline that libvirt generated for you?

Changed in libvirt (Ubuntu):
status: New → Incomplete
Revision history for this message
Felipe Reyes (freyes) wrote : Re: [Bug 1561019] Re: copied cpu flags don't match host cpu

On Tue, 12 Apr 2016 16:49:39 -0000
ChristianEhrhardt <email address hidden> wrote:

> Hi,
> if no cpu model is set the default is qemu64 which is rather
> conservative for
> - maximum migratability
> - guaranteeing the same feature set will be available wherever you
> start your KVM
>
> But I thought svm/vmx would be part of that set.
>
> A short test at least on Xenial proved my assumption that for me that
> is flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge
> mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl
> pni vmx cx16 x2apic popcnt hypervisor lahf_lm abm tpr_shadow vnmi
> flexpriority ept vpid Which contains vmx and kvm-ok is happy.
>
> Could you provide a "cat /proc/cpuinfo" of host and guest as well as
> the full commandline that libvirt generated for you?
Here is the output of cpuinfo (see attachment). And the qemu command
line generated is:

qemu-system-x86_64 -enable-kvm -name lp1561019 -S -machine \
   pc-i440fx-vivid,accel=kvm,usb=off -m 2048 -realtime \
   mlock=off -smp 2,sockets=2,cores=1,threads=1 \
   -uuid 464aa942-ff3d-4e37-b2f0-93b47b029e56 \
   -no-user-config -nodefaults -chardev \
   socket,id=charmonitor,path=/var/lib/libvirt/qemu/lp1561019.monitor,server,nowait \
   -mon chardev=charmonitor,id=monitor,mode=control \
   -rtc base=utc -no-shutdown -boot strict=on \
   -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive \
   file=/var/lib/uvtool/libvirt/images/lp1561019.qcow,if=none,id=drive-virtio-disk0,format=qcow2 \
   -device \
   virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
   -drive \
   file=/var/lib/uvtool/libvirt/images/lp1561019-ds.qcow,if=none,id=drive-virtio-disk1,format=raw \
   -device \
   virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1 \
   -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device \
   virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:74:40:d0,bus=pci.0,addr=0x3 \
   -chardev pty,id=charserial0 -device \
   isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:1 \
   -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device \
   virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on

Also, the xml generated by libvirt can be found at http://paste.ubuntu.com/15856773/

The command line I used to create this vm was:
$ uvt-kvm create --memory 2048 --cpu 2 lp1561019 release=trusty arch=amd64

Using virt-manager for the bootstrap of the VM doesn't produce different results.

Best,
--
Felipe Reyes
Software Sustaining Engineer @ Canonical
STS Engineering Team
# Email: <email address hidden> (GPG:0x9B1FFF39)
# Phone: +56 9 7640 7887
# Launchpad: ~freyes | IRC: freyes

Felipe Reyes (freyes)
Changed in libvirt (Ubuntu):
status: Incomplete → New
Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

I asked around for others with a CPU like that.

Until then, the libvirt XML really just confirms that it should be at the defaults.
As I asked, it would be great if you could find the time to check /proc/cpuinfo for flags in host and guest and post those as well.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Actually in discussion we found the proc/cpuinfo was attached.
=> https://launchpadlibrarian.net/254000437/cpuinfo.txt

It seems it really strips away svm which I thought it shouldn't

Waiting for the cross check on another machine - a coworker said he could give it a try later on.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Actually the cpuinfo is already attached at https://launchpadlibrarian.net/254000437/cpuinfo.txt

It really seems to strip svm.

Waiting for a cross check by a coworker later on today.

Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

Actually this was done deliberately by upstream qemu:

commit 75d373ef9729bd22fbc46bfd8dcd158cbf6d9777
Author: Eduardo Habkost <email address hidden>
Date: Fri Oct 3 16:39:51 2014 -0300

    target-i386: Disable SVM by default in KVM mode

    Make SVM be disabled by default on all CPU models when in KVM mode.
    Nested SVM is enabled by default in the KVM kernel module, but it is
    probably less stable than nested VMX (which is already disabled by
    default).

    Add a new compat function, x86_cpu_compat_kvm_no_autodisable(), to keep
    compatibility on previous machine-types.

So that explains why this regressed in wily. The solution is
to add the SVM flag to the qemu64 machine type in
debian/patches/ubuntu/expose-vmx_qemu64cpu.patch

Changed in libvirt (Ubuntu):
importance: Undecided → Medium
Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

I will put this on a list to upload on Friday or Monday to xenial, then SRU to wily.

Revision history for this message
Stefan Bader (smb) wrote :

I was looking around for more info about this whole area and found we were already looking at this once (bug 1494602). Unfortunately (as so often) the reporter just went quiet and at least I forgot about it. So essentially, to make things more fun, that qemu change affects VMs differently depending on the machine type used. So for VMs created in/with Trusty this is ok, but with later releases the default changed and one needs to force the svm/vmx flag on. The provided XML shows a machine type of "pc-i440fx-vivid". So likely when that gets changed from vivid to trusty, then nested works.

Serge, I'm not sure where that patch you refer to is located; I cannot find it. But generally, adding vmx or svm probably has to be done very carefully as those are vendor specific - we would not want this to make boot fail on all Intel CPUs...

Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

Ok so the patch which made this change can't be reverted because there's been a lot of churn, but the effective equivalent would be to remove the svm line in:

static void pc_compat_2_1(MachineState *machine)
{
    pc_compat_2_2(machine);
    x86_cpu_change_kvm_default("svm", NULL);
}

Revision history for this message
Stefan Bader (smb) wrote :

Hm, that seems to make things even worse for me in Xenial. Even when using the trusty machine type (which did work before) I cannot make svm appear inside the guest. Downgrading back to the current version at least makes the machine type hack work...

Revision history for this message
Stefan Bader (smb) wrote :

I think I understand... x86_cpu_change_kvm_default("svm", NULL) causes the default of "off" to be removed for compat levels of 2.1 and older... So what we actually want is either to move that line so it applies to all compat levels, or to remove those lines and also remove the default of "off". Like below (but I want to test this first):

Index: qemu-2.5+dfsg/hw/i386/pc_piix.c
===================================================================
--- qemu-2.5+dfsg.orig/hw/i386/pc_piix.c 2016-04-21 11:24:12.000000000 +0200
+++ qemu-2.5+dfsg/hw/i386/pc_piix.c 2016-04-21 16:03:59.421505982 +0200
@@ -325,7 +325,6 @@ static void pc_compat_2_1(MachineState *

     pc_compat_2_2(machine);
     smbios_uuid_encoded = false;
-    x86_cpu_change_kvm_default("svm", NULL);
     pcms->enforce_aligned_dimm = false;
 }

Index: qemu-2.5+dfsg/hw/i386/pc_q35.c
===================================================================
--- qemu-2.5+dfsg.orig/hw/i386/pc_q35.c 2015-12-12 13:16:02.000000000 +0100
+++ qemu-2.5+dfsg/hw/i386/pc_q35.c 2016-04-21 16:04:17.725505700 +0200
@@ -309,7 +309,6 @@ static void pc_compat_2_1(MachineState *
     pc_compat_2_2(machine);
     pcms->enforce_aligned_dimm = false;
     smbios_uuid_encoded = false;
-    x86_cpu_change_kvm_default("svm", NULL);
 }

 static void pc_compat_2_0(MachineState *machine)
Index: qemu-2.5+dfsg/target-i386/cpu.c
===================================================================
--- qemu-2.5+dfsg.orig/target-i386/cpu.c 2016-04-21 11:24:12.000000000 +0200
+++ qemu-2.5+dfsg/target-i386/cpu.c 2016-04-21 16:03:31.797506408 +0200
@@ -1380,7 +1380,6 @@ static PropValue kvm_default_props[] = {
     { "x2apic", "on" },
     { "acpi", "off" },
     { "monitor", "off" },
-    { "svm", "off" },
     { NULL, NULL },
 };

Revision history for this message
Stefan Bader (smb) wrote :

@Serge, so I can confirm that the above change gets me a qemu64 cpu that supports nested on an AMD box (even with a newer machine type).

Revision history for this message
Serge Hallyn (serge-hallyn) wrote :

Thanks, Stefan!

Revision history for this message
In , Mike (mike-redhat-bugs) wrote :

This bug still crops up from time to time due to the vagrant libvirt provider defaulting to `host-model`.

You can work around it by setting cpu_mode = 'host-passthrough' (or another compatible value) on the provider object when you encounter this issue.

Example:

      LV_CPU_MODE = 'host-passthrough'

      machine.vm.provider :libvirt do |lv, override|
        lv.default_prefix = TYPE_NAME
        lv.memory = MEM_SIZE
        lv.cpu_mode = LV_CPU_MODE
      end

Revision history for this message
In , Jiri (jiri-redhat-bugs) wrote :

This should be finally fixed by (in combination with QEMU 2.9.0):

commit 2a586b4402a7637e0bef9a2876d065c0ce6bfef1
Refs: v3.1.0-9-g2a586b440
Author: Jiri Denemark <email address hidden>
AuthorDate: Mon Jan 30 16:10:22 2017 +0100
Commit: Jiri Denemark <email address hidden>
CommitDate: Fri Mar 3 19:57:56 2017 +0100

    qemucapstest: Update test data for QEMU 2.9.0

    Signed-off-by: Jiri Denemark <email address hidden>

commit 0bde051f3de02b1be25ea4a4d9f062abfa3d1397
Refs: v3.1.0-10-g0bde051f3
Author: Jiri Denemark <email address hidden>
AuthorDate: Mon Jan 30 16:10:49 2017 +0100
Commit: Jiri Denemark <email address hidden>
CommitDate: Fri Mar 3 19:57:56 2017 +0100

    domaincapstest: Add test data for QEMU 2.9.0

    Signed-off-by: Jiri Denemark <email address hidden>

commit d2f8f3052d48f284d56e27c98ce7a2ce6c656e59
Refs: v3.1.0-11-gd2f8f3052
Author: Jiri Denemark <email address hidden>
AuthorDate: Wed Feb 15 10:18:53 2017 +0100
Commit: Jiri Denemark <email address hidden>
CommitDate: Fri Mar 3 19:57:56 2017 +0100

    docs: Update description of the host-model CPU mode

    Signed-off-by: Jiri Denemark <email address hidden>

commit 4c0723a1d75b981e8939c4c5b6bde7607fc7301e
Refs: v3.1.0-12-g4c0723a1d
Author: Jiri Denemark <email address hidden>
AuthorDate: Mon Jan 30 16:30:13 2017 +0100
Commit: Jiri Denemark <email address hidden>
CommitDate: Fri Mar 3 19:57:56 2017 +0100

    qemu: Rename hostCPU/feature element in capabilities cache

    The element will be generalized in the following commits.

    Signed-off-by: Jiri Denemark <email address hidden>

commit 03a34f6b84da009291e8651aba71df8a6761d081
Refs: v3.1.0-13-g03a34f6b8
Author: Jiri Denemark <email address hidden>
AuthorDate: Wed Feb 22 15:46:47 2017 +0100
Commit: Jiri Denemark <email address hidden>
CommitDate: Fri Mar 3 19:57:56 2017 +0100

    qemu: Prepare for more types in qemuMonitorCPUModelInfo

    Signed-off-by: Jiri Denemark <email address hidden>

commit 2fc215dd2ad4b88c1054da804c4c45b3d4e5c2fa
Refs: v3.1.0-14-g2fc215dd2
Author: Jiri Denemark <email address hidden>
AuthorDate: Wed Feb 22 16:01:30 2017 +0100
Commit: Jiri Denemark <email address hidden>
CommitDate: Fri Mar 3 19:57:56 2017 +0100

    qemu: Store more types in qemuMonitorCPUModelInfo

    While query-cpu-model-expansion returns only boolean features on s390,
    but x86_64 reports some integer and string properties which we are
    interested in.

    Signed-off-by: Jiri Denemark <email address hidden>

commit d7f054a512a911a386d9bbeec51379e4bb843ca5
Refs: v3.1.0-15-gd7f054a51
Author: Jiri Denemark <email address hidden>
AuthorDate: Wed Feb 22 16:51:50 2017 +0100
Commit: Jiri Denemark <email address hidden>
CommitDate: Fri Mar 3 19:57:57 2017 +0100

    qemu: Probe "max" CPU model in TCG

    Querying "host" CPU model expansion only makes sense for KVM. QEMU 2.9.0
    introduces a new "max" CPU model which can be used to ask QEMU what the
    best CPU it can provide to a TCG domain is.

    Signed-off-by: Jiri Denemark <email address hidden>

commit f0138289920d5204c1654bc9b17115d1a315d62e
Refs: v3.1.0-16-gf01382899
Author: Jiri Denemark <email address hidden>
AuthorDate:...


Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Hi everybody,
with this being an issue for so long and nobody other than "us" (= Canonical STS + Canonical Dev) caring, I wonder if we should even switch that on again, since it is also intentionally disabled upstream, as Serge reported.

The question is whether users that want/need it can enable it themselves.
So I wonder if the following would enable it again, as it should:
via libvirt
    <cpu mode='custom' match='exact'>
        <model>qemu64</model>
        <feature name='svm' policy='require'/>
    </cpu>
via cmdline
 qemu -cpu qemu64,+svm
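
If someone gets to test this, a quick check from inside the guest would be (a sketch; kvm-ok comes from the cpu-checker package):

  grep -Ewo 'svm|vmx' /proc/cpuinfo | sort -u   # should list svm (AMD) or vmx (Intel)
  kvm-ok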

@smb - could you test this on the AMD system when you find some time?
If it works I'd think we could close this bug without changing something.

Changed in libvirt (Ubuntu):
status: New → Incomplete
Revision history for this message
In , Adam (adam-redhat-bugs) wrote :

Note, I've filed bugs suggesting both virt-manager/virt-install and gnome-boxes should go to host-model and drop their workarounds now:

https://bugzilla.redhat.com/show_bug.cgi?id=1468016
https://bugzilla.gnome.org/show_bug.cgi?id=784573

Revision history for this message
Rafael David Tinoco (rafaeldtinoco) wrote :

Thanks for documenting this here Christian. Helped A LOT this night.

Revision history for this message
Christian Ehrhardt  (paelzer) wrote :

Given the (lack of) feedback, and the fact that this flag is intentionally not part of the default virtual CPU, the old request is Won't Fix. Furthermore, I spawned a task to remove the patches that added it in the past.

TL;DR:
- intentionally not enabled/added upstream
- long gone, with very few people missing it
- still (and in future) available as opt-in by changing the config as outlined in comment #15
- the (dysfunctional) related delta will be removed on the next merge

Changed in libvirt (Ubuntu):
status: Incomplete → Won't Fix
Changed in libvirt (Ubuntu Wily):
status: New → Won't Fix
Changed in libvirt (Ubuntu Xenial):
status: New → Won't Fix
Changed in libvirt:
importance: Unknown → Undecided
status: Unknown → Fix Released