Activity log for bug #1494350

Date Who What changed Old value New value Message
2015-09-10 14:38:10 c sights bug added bug
2015-10-06 09:51:20 Mário Reis bug added subscriber Mário Reis
2015-10-07 08:31:23 Dr. David Alan Gilbert bug added subscriber Dr. David Alan Gilbert
2015-10-15 06:50:40 Markus Breitegger bug added subscriber Markus Breitegger
2015-10-15 11:15:42 Alexandre Derumier bug watch added http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=785557
2015-11-19 14:52:06 lickdragon qemu: status New Fix Committed
2016-02-22 10:25:32 Markus Schade affects qemu linux
2016-02-27 00:32:28 Dominique Poulain bug added subscriber Dominique Poulain
2016-03-09 05:57:35 Liang Chen affects linux ubuntu
2016-03-09 05:59:46 Liang Chen affects ubuntu linux
2016-03-09 06:04:43 Liang Chen bug task added linux (Ubuntu)
2016-03-09 06:04:55 Liang Chen linux (Ubuntu): assignee Liang Chen (cbjchen)
2016-03-09 06:05:02 Liang Chen linux (Ubuntu): status New In Progress
2016-03-09 08:14:46 Liang Chen description I'm pasting in text from Debian Bug 785557 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=785557 b/c I couldn't find this issue reported. It is present in QEMU 2.3, but I haven't tested later versions. Perhaps someone else will find this bug and confirm for later versions. (Or I will when I have time!) -------------------------------------------------------------------------------------------- Hi, I'm trying to debug an issue we're having with some debian.org machines running in QEMU 2.1.2 instances (see [1] for more background). In short, after a live migration guests running Debian Jessie (linux 3.16) stop accounting CPU time properly. /proc/stat in the guest shows no increase in user and system time anymore (regardless of workload) and what stands out are extremely large values for steal time: % cat /proc/stat cpu 2400 0 1842 650879168 2579640 0 25 136562317270 0 0 cpu0 1366 0 1028 161392988 1238598 0 11 383803090749 0 0 cpu1 294 0 240 162582008 639105 0 8 39686436048 0 0 cpu2 406 0 338 163331066 383867 0 4 333994238765 0 0 cpu3 332 0 235 163573105 318069 0 1 1223752959076 0 0 intr 355773871 33 10 0 0 0 0 3 0 1 0 0 36 144 0 0 1638612 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 5001741 41 0 8516993 0 3669582 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ctxt 837862829 btime 1431642967 processes 8529939 procs_running 1 procs_blocked 0 softirq 225193331 2 77532878 172 7250024 819289 0 54 33739135 176552 105675225 Reading the memory pointed to by the steal time MSRs pre- and post-migration, I can see that post-migration the high bytes are set to 0xff: (qemu) xp /8b 0x1fc0cfc0 000000001fc0cfc0: 0x94 0x57 0x77 0xf5 0xff 0xff 0xff 0xff The "jump" in steal time happens when the guest is resumed on the receiving side. I've also been able to consistently reproduce this on a Ganeti cluster at work, using QEMU 2.1.3 and kernels 3.16 and 4.0 in the guests. The issue goes away if I disable the steal time MSR using `-cpu qemu64,-kvm_steal_time`. So, it looks to me as if the steal time MSR is not set/copied properly during live migration, although AFAICT this should be the case after 917367aa968fd4fef29d340e0c7ec8c608dffaab. 
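For reference, the bytes in that "xp /8b 0x1fc0cfc0" dump can be reassembled into the 64-bit counter the guest sees; a minimal standalone sketch, assuming the usual little-endian x86 byte order:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Bytes printed by "xp /8b 0x1fc0cfc0" in the report (x86 guest, little-endian). */
        uint8_t raw[8] = { 0x94, 0x57, 0x77, 0xf5, 0xff, 0xff, 0xff, 0xff };

        uint64_t steal = 0;
        for (int i = 7; i >= 0; i--)
            steal = (steal << 8) | raw[i];   /* little-endian reassembly */

        /* Prints a value just below 2^64; read as a signed number it is roughly
         * -0.18 s, i.e. the counter has wrapped instead of growing monotonically. */
        printf("steal = %llu ns (signed: %lld ns)\n",
               (unsigned long long)steal, (long long)steal);
        return 0;
    }

The result sitting just below 2^64 is exactly what a wrapped (effectively negative) steal delta looks like once it lands in the guest-visible counter.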
After investigating a bit more, it looks like the issue comes from an overflow in the kernel's accumulate_steal_time() (arch/x86/kvm/x86.c:2023): static void accumulate_steal_time(struct kvm_vcpu *vcpu) { u64 delta; if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED)) return; delta = current->sched_info.run_delay - vcpu->arch.st.last_steal; Using systemtap with the attached script to trace KVM execution on the receiving host kernel, we can see that shortly before marking the vCPUs as runnable on a migrated KVM instance with 2 vCPUs, the following happens (** marks lines of interest): ** 0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns 0 qemu-system-x86(18446): -> kvm_arch_vcpu_load 0 vhost-18446(18447): -> kvm_arch_vcpu_should_kick 5 vhost-18446(18447): <- kvm_arch_vcpu_should_kick 23 qemu-system-x86(18446): <- kvm_arch_vcpu_load 0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl 2 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl 0 qemu-system-x86(18446): -> kvm_arch_vcpu_put 2 qemu-system-x86(18446): -> kvm_put_guest_fpu 3 qemu-system-x86(18446): <- kvm_put_guest_fpu 4 qemu-system-x86(18446): <- kvm_arch_vcpu_put ** 0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns 0 qemu-system-x86(18446): -> kvm_arch_vcpu_load 1 qemu-system-x86(18446): <- kvm_arch_vcpu_load 0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl 1 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl 0 qemu-system-x86(18446): -> kvm_arch_vcpu_put 1 qemu-system-x86(18446): -> kvm_put_guest_fpu 2 qemu-system-x86(18446): <- kvm_put_guest_fpu 3 qemu-system-x86(18446): <- kvm_arch_vcpu_put ** 0 qemu-system-x86(18449): kvm_arch_vcpu_load: run_delay=40304 ns steal=7856949 ns 0 qemu-system-x86(18449): -> kvm_arch_vcpu_load ** 7 qemu-system-x86(18449): delta: 18446744073701734971 ns, steal=7856949 ns, run_delay=40304 ns 10 qemu-system-x86(18449): <- kvm_arch_vcpu_load ** 0 qemu-system-x86(18449): -> kvm_arch_vcpu_ioctl_run 4 qemu-system-x86(18449): -> kvm_arch_vcpu_runnable 6 qemu-system-x86(18449): <- kvm_arch_vcpu_runnable ... 0 qemu-system-x86(18448): kvm_arch_vcpu_load: run_delay=0 ns steal=7856949 ns 0 qemu-system-x86(18448): -> kvm_arch_vcpu_load ** 34 qemu-system-x86(18448): delta: 18446744073701694667 ns, steal=7856949 ns, run_delay=0 ns 40 qemu-system-x86(18448): <- kvm_arch_vcpu_load ** 0 qemu-system-x86(18448): -> kvm_arch_vcpu_ioctl_run 5 qemu-system-x86(18448): -> kvm_arch_vcpu_runnable Now, what's really interesting is that current->sched_info.run_delay gets reset because the tasks (threads) using the vCPUs change, and thus have a different current->sched_info: it looks like task 18446 created the two vCPUs, and then they were handed over to 18448 and 18449 respectively. This is also verified by the fact that during the overflow, both vCPUs have the old steal time of the last vcpu_load of task 18446. However, according to Documentation/virtual/kvm/api.txt: - vcpu ioctls: These query and set attributes that control the operation of a single virtual cpu. Only run vcpu ioctls from the same thread that was used to create the vcpu. So it seems qemu is doing something that it shouldn't: calling vCPU ioctls from a thread that didn't create the vCPU. Note that this probably happens on every QEMU startup, but is not visible because the guest kernel zeroes out the steal time on boot. 
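The bogus delta in the trace is exactly what unsigned 64-bit arithmetic yields when the current task's run_delay (40304 ns, from the vCPU thread) is smaller than the recorded last_steal (7856949 ns, sampled while the main thread's run_delay was current). A minimal sketch reproducing the number:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Values taken from the systemtap trace above. */
        uint64_t last_steal = 7856949;  /* ns, recorded while the main thread's run_delay applied */
        uint64_t run_delay  = 40304;    /* ns, run_delay of the vCPU thread that actually runs the guest */

        /* Same expression as accumulate_steal_time(): u64 arithmetic wraps around. */
        uint64_t delta = run_delay - last_steal;

        /* Prints 18446744073701734971 -- the exact bogus delta seen in the trace. */
        printf("delta = %llu ns\n", (unsigned long long)delta);
        return 0;
    }

Running it prints 18446744073701734971, matching the traced delta: 2^64 minus the 7816645 ns difference.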
There are at least two ways to mitigate the issue without a kernel recompilation: - The first one is to disable the steal time propagation from host to guest by invoking qemu with `-cpu qemu64,-kvm_steal_time`. This will short-circuit accumulate_steal_time() due to (vcpu->arch.st.msr_val & KVM_MSR_ENABLED) and will completely disable steal time reporting in the guest, which may not be desired if people rely on it to detect CPU congestion. - The other one is using the following systemtap script to prevent the steal time counter from overflowing by dropping the problematic samples (WARNING: systemtap guru mode required, use at your own risk): probe module("kvm").statement("*@arch/x86/kvm/x86.c:2024") { if (@defined($delta) && $delta < 0) { printk(4, "kvm: steal time delta < 0, dropping") $delta = 0 } } Note that not all *guests* handle this condition in the same way: 3.2 guests still get the overflow in /proc/stat, but their scheduler continues to work as expected. 3.16 guests OTOH go nuts once steal time overflows and stop accumulating system & user time, while entering an erratic state where steal time in /proc/stat is *decreasing* on every clock tick. -------------------------------------------- Revised statement: > Now, what's really interesting is that current->sched_info.run_delay > gets reset because the tasks (threads) using the vCPUs change, and > thus have a different current->sched_info: it looks like task 18446 > created the two vCPUs, and then they were handed over to 18448 and > 18449 respectively. This is also verified by the fact that during the > overflow, both vCPUs have the old steal time of the last vcpu_load of > task 18446. However, according to Documentation/virtual/kvm/api.txt: The above is not entirely accurate: the vCPUs were created by the threads that are used to run them (18448 and 18449 respectively), it's just that the main thread is issuing ioctls during initialization, as illustrated by the strace output on a different process: [ vCPU #0 thread creating vCPU #0 (fd 20) ] [pid 1861] ioctl(14, KVM_CREATE_VCPU, 0) = 20 [pid 1861] ioctl(20, KVM_X86_SETUP_MCE, 0x7fbd3ca40cd8) = 0 [pid 1861] ioctl(20, KVM_SET_CPUID2, 0x7fbd3ca40ce0) = 0 [pid 1861] ioctl(20, KVM_SET_SIGNAL_MASK, 0x7fbd380008f0) = 0 [ vCPU #1 thread creating vCPU #1 (fd 21) ] [pid 1862] ioctl(14, KVM_CREATE_VCPU, 0x1) = 21 [pid 1862] ioctl(21, KVM_X86_SETUP_MCE, 0x7fbd37ffdcd8) = 0 [pid 1862] ioctl(21, KVM_SET_CPUID2, 0x7fbd37ffdce0) = 0 [pid 1862] ioctl(21, KVM_SET_SIGNAL_MASK, 0x7fbd300008f0) = 0 [ Main thread calling kvm_arch_put_registers() on vCPU #0 ] [pid 1859] ioctl(20, KVM_SET_REGS, 0x7ffc98aac230) = 0 [pid 1859] ioctl(20, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd38001000) = 0 [pid 1859] ioctl(20, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0 [pid 1859] ioctl(20, KVM_SET_SREGS, 0x7ffc98aac050) = 0 [pid 1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aab820) = 87 [pid 1859] ioctl(20, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0 [pid 1859] ioctl(20, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0 [pid 1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1 [pid 1859] ioctl(20, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0 [pid 1859] ioctl(20, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0 [ Main thread calling kvm_arch_put_registers() on vCPU #1 ] [pid 1859] ioctl(21, KVM_SET_REGS, 0x7ffc98aac230) = 0 [pid 1859] ioctl(21, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd30001000) = 0 [pid 1859] ioctl(21, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0 [pid 1859] ioctl(21, KVM_SET_SREGS, 
0x7ffc98aac050) = 0 [pid 1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aab820) = 87 [pid 1859] ioctl(21, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0 [pid 1859] ioctl(21, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0 [pid 1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1 [pid 1859] ioctl(21, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0 [pid 1859] ioctl(21, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0 Using systemtap again, I noticed that the main thread's run_delay is copied to last_steal only after a KVM_SET_MSRS ioctl which enables the steal time MSR is issued by the main thread (see linux 3.16.7-ckt11-1/arch/x86/kvm/x86.c:2162). Taking an educated guess, I reverted the following qemu commits: commit 0e5035776df31380a44a1a851850d110b551ecb6 Author: Marcelo Tosatti <mtosatti@redhat.com> Date: Tue Sep 3 18:55:16 2013 -0300 fix steal time MSR vmsd callback to proper opaque type Convert steal time MSR vmsd callback pointer to proper X86CPU type. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> commit 917367aa968fd4fef29d340e0c7ec8c608dffaab Author: Marcelo Tosatti <mtosatti@redhat.com> Date: Tue Feb 19 23:27:20 2013 -0300 target-i386: kvm: save/restore steal time MSR Read and write steal time MSR, so that reporting is functional across migration. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Gleb Natapov <gleb@redhat.com> and the steal time jump on migration went away. However, steal time was not reported at all after migration, which is expected after reverting 917367aa. So it seems that after 917367aa, the steal time MSR is correctly saved and copied to the receiving side, but then it is restored by the main thread (probably during cpu_synchronize_all_post_init()), causing the overflow when the vCPU threads are unpaused. Testing Steps with patched trusty kernel: Using savemv & loadvm to simulate the migration process. In guest: 1. check steal_time data location rdmsr 0x4b564d03 <----- returns the start address 0x7fc0d000 2. Exam the steal time before savevm in qemu monitor (qemu) xp /ug 0x7fc0d000 xp /ug 0x7fc0d000 000000007fc0d000: 144139048 <---- steal time value before savevm (qemu) savevm mytestvm7 savevm mytestvm7 (qemu) quit quit 3. give some load to the host system, e.g. stress --cpu <XXX> 4. start the guest with "-loadvm mytestvm7" to restore the state of the VM, thus the steal_time MSR (qemu) xp /ug 0x7fc0d000 xp /ug 0x7fc0d000 000000007fc0d000: 147428917 <---- with high cpu load after loadvm, steal time still shows a linear increase The steal_time value would go backward (because of the overflow) after the restoration of the VM state without the fix. ----------------------------------------------------------------------------- I'm pasting in text from Debian Bug 785557 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=785557 b/c I couldn't find this issue reported. It is present in QEMU 2.3, but I haven't tested later versions. Perhaps someone else will find this bug and confirm for later versions. (Or I will when I have time!) -------------------------------------------------------------------------------------------- Hi, I'm trying to debug an issue we're having with some debian.org machines running in QEMU 2.1.2 instances (see [1] for more background). In short, after a live migration guests running Debian Jessie (linux 3.16) stop accounting CPU time properly. 
/proc/stat in the guest shows no increase in user and system time anymore (regardless of workload) and what stands out are extremely large values for steal time:  % cat /proc/stat  cpu 2400 0 1842 650879168 2579640 0 25 136562317270 0 0  cpu0 1366 0 1028 161392988 1238598 0 11 383803090749 0 0  cpu1 294 0 240 162582008 639105 0 8 39686436048 0 0  cpu2 406 0 338 163331066 383867 0 4 333994238765 0 0  cpu3 332 0 235 163573105 318069 0 1 1223752959076 0 0  intr 355773871 33 10 0 0 0 0 3 0 1 0 0 36 144 0 0 1638612 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 5001741 41 0 8516993 0 3669582 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  ctxt 837862829  btime 1431642967  processes 8529939  procs_running 1  procs_blocked 0  softirq 225193331 2 77532878 172 7250024 819289 0 54 33739135 176552 105675225 Reading the memory pointed to by the steal time MSRs pre- and post-migration, I can see that post-migration the high bytes are set to 0xff: (qemu) xp /8b 0x1fc0cfc0 000000001fc0cfc0: 0x94 0x57 0x77 0xf5 0xff 0xff 0xff 0xff The "jump" in steal time happens when the guest is resumed on the receiving side. I've also been able to consistently reproduce this on a Ganeti cluster at work, using QEMU 2.1.3 and kernels 3.16 and 4.0 in the guests. The issue goes away if I disable the steal time MSR using `-cpu qemu64,-kvm_steal_time`. So, it looks to me as if the steal time MSR is not set/copied properly during live migration, although AFAICT this should be the case after 917367aa968fd4fef29d340e0c7ec8c608dffaab. 
After investigating a bit more, it looks like the issue comes from an overflow in the kernel's accumulate_steal_time() (arch/x86/kvm/x86.c:2023):   static void accumulate_steal_time(struct kvm_vcpu *vcpu)   {           u64 delta;           if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))                   return;           delta = current->sched_info.run_delay - vcpu->arch.st.last_steal; Using systemtap with the attached script to trace KVM execution on the receiving host kernel, we can see that shortly before marking the vCPUs as runnable on a migrated KVM instance with 2 vCPUs, the following happens (** marks lines of interest):  ** 0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns      0 qemu-system-x86(18446): -> kvm_arch_vcpu_load      0 vhost-18446(18447): -> kvm_arch_vcpu_should_kick      5 vhost-18446(18447): <- kvm_arch_vcpu_should_kick     23 qemu-system-x86(18446): <- kvm_arch_vcpu_load      0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl      2 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl      0 qemu-system-x86(18446): -> kvm_arch_vcpu_put      2 qemu-system-x86(18446): -> kvm_put_guest_fpu      3 qemu-system-x86(18446): <- kvm_put_guest_fpu      4 qemu-system-x86(18446): <- kvm_arch_vcpu_put  ** 0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns      0 qemu-system-x86(18446): -> kvm_arch_vcpu_load      1 qemu-system-x86(18446): <- kvm_arch_vcpu_load      0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl      1 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl      0 qemu-system-x86(18446): -> kvm_arch_vcpu_put      1 qemu-system-x86(18446): -> kvm_put_guest_fpu      2 qemu-system-x86(18446): <- kvm_put_guest_fpu      3 qemu-system-x86(18446): <- kvm_arch_vcpu_put  ** 0 qemu-system-x86(18449): kvm_arch_vcpu_load: run_delay=40304 ns steal=7856949 ns      0 qemu-system-x86(18449): -> kvm_arch_vcpu_load  ** 7 qemu-system-x86(18449): delta: 18446744073701734971 ns, steal=7856949 ns, run_delay=40304 ns     10 qemu-system-x86(18449): <- kvm_arch_vcpu_load  ** 0 qemu-system-x86(18449): -> kvm_arch_vcpu_ioctl_run      4 qemu-system-x86(18449): -> kvm_arch_vcpu_runnable      6 qemu-system-x86(18449): <- kvm_arch_vcpu_runnable      ...      0 qemu-system-x86(18448): kvm_arch_vcpu_load: run_delay=0 ns steal=7856949 ns      0 qemu-system-x86(18448): -> kvm_arch_vcpu_load  ** 34 qemu-system-x86(18448): delta: 18446744073701694667 ns, steal=7856949 ns, run_delay=0 ns     40 qemu-system-x86(18448): <- kvm_arch_vcpu_load  ** 0 qemu-system-x86(18448): -> kvm_arch_vcpu_ioctl_run      5 qemu-system-x86(18448): -> kvm_arch_vcpu_runnable Now, what's really interesting is that current->sched_info.run_delay gets reset because the tasks (threads) using the vCPUs change, and thus have a different current->sched_info: it looks like task 18446 created the two vCPUs, and then they were handed over to 18448 and 18449 respectively. This is also verified by the fact that during the overflow, both vCPUs have the old steal time of the last vcpu_load of task 18446. However, according to Documentation/virtual/kvm/api.txt:  - vcpu ioctls: These query and set attributes that control the operation    of a single virtual cpu.    Only run vcpu ioctls from the same thread that was used to create the vcpu. So it seems qemu is doing something that it shouldn't: calling vCPU ioctls from a thread that didn't create the vCPU. Note that this probably happens on every QEMU startup, but is not visible because the guest kernel zeroes out the steal time on boot. 
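The key point is that sched_info.run_delay is a per-task quantity, so whichever thread issues the ioctl determines which run_delay gets sampled. It can be observed from userspace, assuming the usual three-field /proc/<pid>/task/<tid>/schedstat format (on-cpu time, run_delay, timeslice count); the PIDs below are placeholders taken from the trace:

    #include <stdio.h>

    /* Print the run_delay (time spent waiting on a runqueue, in ns) of one thread. */
    static int print_run_delay(long pid, long tid)
    {
        char path[128];
        unsigned long long on_cpu, run_delay, slices;
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%ld/task/%ld/schedstat", pid, tid);
        f = fopen(path, "r");
        if (!f)
            return -1;
        if (fscanf(f, "%llu %llu %llu", &on_cpu, &run_delay, &slices) == 3)
            printf("tid %ld: run_delay = %llu ns\n", tid, run_delay);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* Placeholder PIDs/TIDs from the trace: main thread vs. one vCPU thread. */
        print_run_delay(18446, 18446);
        print_run_delay(18446, 18449);
        return 0;
    }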
There are at least two ways to mitigate the issue without a kernel recompilation:  - The first one is to disable the steal time propagation from host to    guest by invoking qemu with `-cpu qemu64,-kvm_steal_time`. This will    short-circuit accumulate_steal_time() due to (vcpu->arch.st.msr_val &    KVM_MSR_ENABLED) and will completely disable steal time reporting in    the guest, which may not be desired if people rely on it to detect    CPU congestion.  - The other one is using the following systemtap script to prevent the    steal time counter from overflowing by dropping the problematic    samples (WARNING: systemtap guru mode required, use at your own    risk):       probe module("kvm").statement("*@arch/x86/kvm/x86.c:2024") {         if (@defined($delta) && $delta < 0) {           printk(4, "kvm: steal time delta < 0, dropping")           $delta = 0         }       } Note that not all *guests* handle this condition in the same way: 3.2 guests still get the overflow in /proc/stat, but their scheduler continues to work as expected. 3.16 guests OTOH go nuts once steal time overflows and stop accumulating system & user time, while entering an erratic state where steal time in /proc/stat is *decreasing* on every clock tick. -------------------------------------------- Revised statement: > Now, what's really interesting is that current->sched_info.run_delay > gets reset because the tasks (threads) using the vCPUs change, and > thus have a different current->sched_info: it looks like task 18446 > created the two vCPUs, and then they were handed over to 18448 and > 18449 respectively. This is also verified by the fact that during the > overflow, both vCPUs have the old steal time of the last vcpu_load of > task 18446. However, according to Documentation/virtual/kvm/api.txt: The above is not entirely accurate: the vCPUs were created by the threads that are used to run them (18448 and 18449 respectively), it's just that the main thread is issuing ioctls during initialization, as illustrated by the strace output on a different process:  [ vCPU #0 thread creating vCPU #0 (fd 20) ]  [pid 1861] ioctl(14, KVM_CREATE_VCPU, 0) = 20  [pid 1861] ioctl(20, KVM_X86_SETUP_MCE, 0x7fbd3ca40cd8) = 0  [pid 1861] ioctl(20, KVM_SET_CPUID2, 0x7fbd3ca40ce0) = 0  [pid 1861] ioctl(20, KVM_SET_SIGNAL_MASK, 0x7fbd380008f0) = 0  [ vCPU #1 thread creating vCPU #1 (fd 21) ]  [pid 1862] ioctl(14, KVM_CREATE_VCPU, 0x1) = 21  [pid 1862] ioctl(21, KVM_X86_SETUP_MCE, 0x7fbd37ffdcd8) = 0  [pid 1862] ioctl(21, KVM_SET_CPUID2, 0x7fbd37ffdce0) = 0  [pid 1862] ioctl(21, KVM_SET_SIGNAL_MASK, 0x7fbd300008f0) = 0  [ Main thread calling kvm_arch_put_registers() on vCPU #0 ]  [pid 1859] ioctl(20, KVM_SET_REGS, 0x7ffc98aac230) = 0  [pid 1859] ioctl(20, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd38001000) = 0  [pid 1859] ioctl(20, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0  [pid 1859] ioctl(20, KVM_SET_SREGS, 0x7ffc98aac050) = 0  [pid 1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aab820) = 87  [pid 1859] ioctl(20, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0  [pid 1859] ioctl(20, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0  [pid 1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1  [pid 1859] ioctl(20, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0  [pid 1859] ioctl(20, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0  [ Main thread calling kvm_arch_put_registers() on vCPU #1 ]  [pid 1859] ioctl(21, KVM_SET_REGS, 0x7ffc98aac230) = 0  [pid 1859] ioctl(21, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd30001000) = 0  [pid 1859] ioctl(21, 
KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0  [pid 1859] ioctl(21, KVM_SET_SREGS, 0x7ffc98aac050) = 0  [pid 1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aab820) = 87  [pid 1859] ioctl(21, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0  [pid 1859] ioctl(21, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0  [pid 1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1  [pid 1859] ioctl(21, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0  [pid 1859] ioctl(21, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0 Using systemtap again, I noticed that the main thread's run_delay is copied to last_steal only after a KVM_SET_MSRS ioctl which enables the steal time MSR is issued by the main thread (see linux 3.16.7-ckt11-1/arch/x86/kvm/x86.c:2162). Taking an educated guess, I reverted the following qemu commits:  commit 0e5035776df31380a44a1a851850d110b551ecb6  Author: Marcelo Tosatti <mtosatti@redhat.com>  Date: Tue Sep 3 18:55:16 2013 -0300      fix steal time MSR vmsd callback to proper opaque type      Convert steal time MSR vmsd callback pointer to proper X86CPU type.      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>  commit 917367aa968fd4fef29d340e0c7ec8c608dffaab  Author: Marcelo Tosatti <mtosatti@redhat.com>  Date: Tue Feb 19 23:27:20 2013 -0300      target-i386: kvm: save/restore steal time MSR      Read and write steal time MSR, so that reporting is functional across      migration.      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>      Signed-off-by: Gleb Natapov <gleb@redhat.com> and the steal time jump on migration went away. However, steal time was not reported at all after migration, which is expected after reverting 917367aa. So it seems that after 917367aa, the steal time MSR is correctly saved and copied to the receiving side, but then it is restored by the main thread (probably during cpu_synchronize_all_post_init()), causing the overflow when the vCPU threads are unpaused.
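For illustration, this is roughly the guard that the systemtap workaround injects into accumulate_steal_time(). The body below is a sketch based on the 3.16-era function quoted above; the last_steal/accum_steal updates are reconstructed from that kernel version, and the fix that eventually shipped may be structured differently:

    /* Sketch only: treat an apparent "negative" delta (current task's run_delay
     * behind the recorded last_steal) as zero instead of letting the u64
     * subtraction wrap around. Types (u64, struct kvm_vcpu) are the kernel's. */
    static void accumulate_steal_time(struct kvm_vcpu *vcpu)
    {
        u64 delta;

        if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
            return;

        if (current->sched_info.run_delay < vcpu->arch.st.last_steal)
            delta = 0;                       /* would have underflowed */
        else
            delta = current->sched_info.run_delay - vcpu->arch.st.last_steal;

        vcpu->arch.st.last_steal = current->sched_info.run_delay;
        vcpu->arch.st.accum_steal = delta;
    }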
2016-03-09 08:15:45 Liang Chen description Testing Steps with patched trusty kernel: Using savemv & loadvm to simulate the migration process. In guest: 1. check steal_time data location rdmsr 0x4b564d03 <----- returns the start address 0x7fc0d000 2. Exam the steal time before savevm in qemu monitor (qemu) xp /ug 0x7fc0d000 xp /ug 0x7fc0d000 000000007fc0d000: 144139048 <---- steal time value before savevm (qemu) savevm mytestvm7 savevm mytestvm7 (qemu) quit quit 3. give some load to the host system, e.g. stress --cpu <XXX> 4. start the guest with "-loadvm mytestvm7" to restore the state of the VM, thus the steal_time MSR (qemu) xp /ug 0x7fc0d000 xp /ug 0x7fc0d000 000000007fc0d000: 147428917 <---- with high cpu load after loadvm, steal time still shows a linear increase The steal_time value would go backward (because of the overflow) after the restoration of the VM state without the fix. ----------------------------------------------------------------------------- I'm pasting in text from Debian Bug 785557 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=785557 b/c I couldn't find this issue reported. It is present in QEMU 2.3, but I haven't tested later versions. Perhaps someone else will find this bug and confirm for later versions. (Or I will when I have time!) -------------------------------------------------------------------------------------------- Hi, I'm trying to debug an issue we're having with some debian.org machines running in QEMU 2.1.2 instances (see [1] for more background). In short, after a live migration guests running Debian Jessie (linux 3.16) stop accounting CPU time properly. /proc/stat in the guest shows no increase in user and system time anymore (regardless of workload) and what stands out are extremely large values for steal time:  % cat /proc/stat  cpu 2400 0 1842 650879168 2579640 0 25 136562317270 0 0  cpu0 1366 0 1028 161392988 1238598 0 11 383803090749 0 0  cpu1 294 0 240 162582008 639105 0 8 39686436048 0 0  cpu2 406 0 338 163331066 383867 0 4 333994238765 0 0  cpu3 332 0 235 163573105 318069 0 1 1223752959076 0 0  intr 355773871 33 10 0 0 0 0 3 0 1 0 0 36 144 0 0 1638612 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 5001741 41 0 8516993 0 3669582 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  
ctxt 837862829  btime 1431642967  processes 8529939  procs_running 1  procs_blocked 0  softirq 225193331 2 77532878 172 7250024 819289 0 54 33739135 176552 105675225 Reading the memory pointed to by the steal time MSRs pre- and post-migration, I can see that post-migration the high bytes are set to 0xff: (qemu) xp /8b 0x1fc0cfc0 000000001fc0cfc0: 0x94 0x57 0x77 0xf5 0xff 0xff 0xff 0xff The "jump" in steal time happens when the guest is resumed on the receiving side. I've also been able to consistently reproduce this on a Ganeti cluster at work, using QEMU 2.1.3 and kernels 3.16 and 4.0 in the guests. The issue goes away if I disable the steal time MSR using `-cpu qemu64,-kvm_steal_time`. So, it looks to me as if the steal time MSR is not set/copied properly during live migration, although AFAICT this should be the case after 917367aa968fd4fef29d340e0c7ec8c608dffaab. After investigating a bit more, it looks like the issue comes from an overflow in the kernel's accumulate_steal_time() (arch/x86/kvm/x86.c:2023):   static void accumulate_steal_time(struct kvm_vcpu *vcpu)   {           u64 delta;           if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))                   return;           delta = current->sched_info.run_delay - vcpu->arch.st.last_steal; Using systemtap with the attached script to trace KVM execution on the receiving host kernel, we can see that shortly before marking the vCPUs as runnable on a migrated KVM instance with 2 vCPUs, the following happens (** marks lines of interest):  ** 0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns      0 qemu-system-x86(18446): -> kvm_arch_vcpu_load      0 vhost-18446(18447): -> kvm_arch_vcpu_should_kick      5 vhost-18446(18447): <- kvm_arch_vcpu_should_kick     23 qemu-system-x86(18446): <- kvm_arch_vcpu_load      0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl      2 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl      0 qemu-system-x86(18446): -> kvm_arch_vcpu_put      2 qemu-system-x86(18446): -> kvm_put_guest_fpu      3 qemu-system-x86(18446): <- kvm_put_guest_fpu      4 qemu-system-x86(18446): <- kvm_arch_vcpu_put  ** 0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns      0 qemu-system-x86(18446): -> kvm_arch_vcpu_load      1 qemu-system-x86(18446): <- kvm_arch_vcpu_load      0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl      1 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl      0 qemu-system-x86(18446): -> kvm_arch_vcpu_put      1 qemu-system-x86(18446): -> kvm_put_guest_fpu      2 qemu-system-x86(18446): <- kvm_put_guest_fpu      3 qemu-system-x86(18446): <- kvm_arch_vcpu_put  ** 0 qemu-system-x86(18449): kvm_arch_vcpu_load: run_delay=40304 ns steal=7856949 ns      0 qemu-system-x86(18449): -> kvm_arch_vcpu_load  ** 7 qemu-system-x86(18449): delta: 18446744073701734971 ns, steal=7856949 ns, run_delay=40304 ns     10 qemu-system-x86(18449): <- kvm_arch_vcpu_load  ** 0 qemu-system-x86(18449): -> kvm_arch_vcpu_ioctl_run      4 qemu-system-x86(18449): -> kvm_arch_vcpu_runnable      6 qemu-system-x86(18449): <- kvm_arch_vcpu_runnable      ...      
0 qemu-system-x86(18448): kvm_arch_vcpu_load: run_delay=0 ns steal=7856949 ns      0 qemu-system-x86(18448): -> kvm_arch_vcpu_load  ** 34 qemu-system-x86(18448): delta: 18446744073701694667 ns, steal=7856949 ns, run_delay=0 ns     40 qemu-system-x86(18448): <- kvm_arch_vcpu_load  ** 0 qemu-system-x86(18448): -> kvm_arch_vcpu_ioctl_run      5 qemu-system-x86(18448): -> kvm_arch_vcpu_runnable Now, what's really interesting is that current->sched_info.run_delay gets reset because the tasks (threads) using the vCPUs change, and thus have a different current->sched_info: it looks like task 18446 created the two vCPUs, and then they were handed over to 18448 and 18449 respectively. This is also verified by the fact that during the overflow, both vCPUs have the old steal time of the last vcpu_load of task 18446. However, according to Documentation/virtual/kvm/api.txt:  - vcpu ioctls: These query and set attributes that control the operation    of a single virtual cpu.    Only run vcpu ioctls from the same thread that was used to create the vcpu. So it seems qemu is doing something that it shouldn't: calling vCPU ioctls from a thread that didn't create the vCPU. Note that this probably happens on every QEMU startup, but is not visible because the guest kernel zeroes out the steal time on boot. There are at least two ways to mitigate the issue without a kernel recompilation:  - The first one is to disable the steal time propagation from host to    guest by invoking qemu with `-cpu qemu64,-kvm_steal_time`. This will    short-circuit accumulate_steal_time() due to (vcpu->arch.st.msr_val &    KVM_MSR_ENABLED) and will completely disable steal time reporting in    the guest, which may not be desired if people rely on it to detect    CPU congestion.  - The other one is using the following systemtap script to prevent the    steal time counter from overflowing by dropping the problematic    samples (WARNING: systemtap guru mode required, use at your own    risk):       probe module("kvm").statement("*@arch/x86/kvm/x86.c:2024") {         if (@defined($delta) && $delta < 0) {           printk(4, "kvm: steal time delta < 0, dropping")           $delta = 0         }       } Note that not all *guests* handle this condition in the same way: 3.2 guests still get the overflow in /proc/stat, but their scheduler continues to work as expected. 3.16 guests OTOH go nuts once steal time overflows and stop accumulating system & user time, while entering an erratic state where steal time in /proc/stat is *decreasing* on every clock tick. -------------------------------------------- Revised statement: > Now, what's really interesting is that current->sched_info.run_delay > gets reset because the tasks (threads) using the vCPUs change, and > thus have a different current->sched_info: it looks like task 18446 > created the two vCPUs, and then they were handed over to 18448 and > 18449 respectively. This is also verified by the fact that during the > overflow, both vCPUs have the old steal time of the last vcpu_load of > task 18446. 
However, according to Documentation/virtual/kvm/api.txt: The above is not entirely accurate: the vCPUs were created by the threads that are used to run them (18448 and 18449 respectively), it's just that the main thread is issuing ioctls during initialization, as illustrated by the strace output on a different process:  [ vCPU #0 thread creating vCPU #0 (fd 20) ]  [pid 1861] ioctl(14, KVM_CREATE_VCPU, 0) = 20  [pid 1861] ioctl(20, KVM_X86_SETUP_MCE, 0x7fbd3ca40cd8) = 0  [pid 1861] ioctl(20, KVM_SET_CPUID2, 0x7fbd3ca40ce0) = 0  [pid 1861] ioctl(20, KVM_SET_SIGNAL_MASK, 0x7fbd380008f0) = 0  [ vCPU #1 thread creating vCPU #1 (fd 21) ]  [pid 1862] ioctl(14, KVM_CREATE_VCPU, 0x1) = 21  [pid 1862] ioctl(21, KVM_X86_SETUP_MCE, 0x7fbd37ffdcd8) = 0  [pid 1862] ioctl(21, KVM_SET_CPUID2, 0x7fbd37ffdce0) = 0  [pid 1862] ioctl(21, KVM_SET_SIGNAL_MASK, 0x7fbd300008f0) = 0  [ Main thread calling kvm_arch_put_registers() on vCPU #0 ]  [pid 1859] ioctl(20, KVM_SET_REGS, 0x7ffc98aac230) = 0  [pid 1859] ioctl(20, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd38001000) = 0  [pid 1859] ioctl(20, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0  [pid 1859] ioctl(20, KVM_SET_SREGS, 0x7ffc98aac050) = 0  [pid 1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aab820) = 87  [pid 1859] ioctl(20, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0  [pid 1859] ioctl(20, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0  [pid 1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1  [pid 1859] ioctl(20, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0  [pid 1859] ioctl(20, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0  [ Main thread calling kvm_arch_put_registers() on vCPU #1 ]  [pid 1859] ioctl(21, KVM_SET_REGS, 0x7ffc98aac230) = 0  [pid 1859] ioctl(21, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd30001000) = 0  [pid 1859] ioctl(21, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0  [pid 1859] ioctl(21, KVM_SET_SREGS, 0x7ffc98aac050) = 0  [pid 1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aab820) = 87  [pid 1859] ioctl(21, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0  [pid 1859] ioctl(21, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0  [pid 1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1  [pid 1859] ioctl(21, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0  [pid 1859] ioctl(21, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0 Using systemtap again, I noticed that the main thread's run_delay is copied to last_steal only after a KVM_SET_MSRS ioctl which enables the steal time MSR is issued by the main thread (see linux 3.16.7-ckt11-1/arch/x86/kvm/x86.c:2162). Taking an educated guess, I reverted the following qemu commits:  commit 0e5035776df31380a44a1a851850d110b551ecb6  Author: Marcelo Tosatti <mtosatti@redhat.com>  Date: Tue Sep 3 18:55:16 2013 -0300      fix steal time MSR vmsd callback to proper opaque type      Convert steal time MSR vmsd callback pointer to proper X86CPU type.      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>  commit 917367aa968fd4fef29d340e0c7ec8c608dffaab  Author: Marcelo Tosatti <mtosatti@redhat.com>  Date: Tue Feb 19 23:27:20 2013 -0300      target-i386: kvm: save/restore steal time MSR      Read and write steal time MSR, so that reporting is functional across      migration.      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>      Signed-off-by: Gleb Natapov <gleb@redhat.com> and the steal time jump on migration went away. However, steal time was not reported at all after migration, which is expected after reverting 917367aa. 
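The rdmsr/xp steps in the testing section rely on the KVM steal-time MSR layout: per the kernel's kvm MSR documentation, bit 0 of MSR 0x4b564d03 is the enable flag (KVM_MSR_ENABLED) and the remaining bits give the guest-physical address of a 64-byte record whose first field is the cumulative steal time in nanoseconds, which is what "xp /ug" reads. A small sketch of that layout; the MSR value used here is hypothetical:

    #include <stdio.h>
    #include <stdint.h>

    /* Guest record behind MSR 0x4b564d03 (KVM steal time), per the kernel's
     * kvm/msr documentation. */
    struct kvm_steal_time {
        uint64_t steal;      /* ns the vCPU wanted to run but was not scheduled */
        uint32_t version;    /* even = stable, odd = update in progress */
        uint32_t flags;
        uint8_t  pad[48];
    };

    int main(void)
    {
        uint64_t msr_val = 0x7fc0d001ULL;        /* hypothetical: record address | enable bit */
        int enabled      = msr_val & 1;          /* KVM_MSR_ENABLED */
        uint64_t gpa     = msr_val & ~0x3fULL;   /* 64-byte aligned record address */

        printf("steal time reporting %s, record at 0x%llx (%zu bytes)\n",
               enabled ? "enabled" : "disabled",
               (unsigned long long)gpa, sizeof(struct kvm_steal_time));
        return 0;
    }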
So it seems that after 917367aa, the steal time MSR is correctly saved and copied to the receiving side, but then it is restored by the main thread (probably during cpu_synchronize_all_post_init()), causing the overflow when the vCPU threads are unpaused. Testing Steps with patched trusty kernel: Using savemv & loadvm to simulate the migration process. In guest: 1. check steal_time data location rdmsr 0x4b564d03 <----- returns the start address 0x7fc0d000 2. exam the steal time before savevm in qemu monitor (qemu) xp /ug 0x7fc0d000 xp /ug 0x7fc0d000 000000007fc0d000: 144139048 <---- steal time value before savevm (qemu) savevm mytestvm7 savevm mytestvm7 (qemu) quit quit 3. give some load to the host system, e.g. stress --cpu <XXX> 4. start the guest with "-loadvm mytestvm7" to restore the state of the VM, thus the steal_time MSR 5. exam the steal time value again (qemu) xp /ug 0x7fc0d000 xp /ug 0x7fc0d000 000000007fc0d000: 147428917 <---- with high cpu load after loadvm, steal time still shows a linear increase The steal_time value would go backward (because of the overflow) after the restoration of the VM state without the fix. ----------------------------------------------------------------------------- I'm pasting in text from Debian Bug 785557 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=785557 b/c I couldn't find this issue reported. It is present in QEMU 2.3, but I haven't tested later versions. Perhaps someone else will find this bug and confirm for later versions. (Or I will when I have time!) -------------------------------------------------------------------------------------------- Hi, I'm trying to debug an issue we're having with some debian.org machines running in QEMU 2.1.2 instances (see [1] for more background). In short, after a live migration guests running Debian Jessie (linux 3.16) stop accounting CPU time properly. 
/proc/stat in the guest shows no increase in user and system time anymore (regardless of workload) and what stands out are extremely large values for steal time:  % cat /proc/stat  cpu 2400 0 1842 650879168 2579640 0 25 136562317270 0 0  cpu0 1366 0 1028 161392988 1238598 0 11 383803090749 0 0  cpu1 294 0 240 162582008 639105 0 8 39686436048 0 0  cpu2 406 0 338 163331066 383867 0 4 333994238765 0 0  cpu3 332 0 235 163573105 318069 0 1 1223752959076 0 0  intr 355773871 33 10 0 0 0 0 3 0 1 0 0 36 144 0 0 1638612 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 5001741 41 0 8516993 0 3669582 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  ctxt 837862829  btime 1431642967  processes 8529939  procs_running 1  procs_blocked 0  softirq 225193331 2 77532878 172 7250024 819289 0 54 33739135 176552 105675225 Reading the memory pointed to by the steal time MSRs pre- and post-migration, I can see that post-migration the high bytes are set to 0xff: (qemu) xp /8b 0x1fc0cfc0 000000001fc0cfc0: 0x94 0x57 0x77 0xf5 0xff 0xff 0xff 0xff The "jump" in steal time happens when the guest is resumed on the receiving side. I've also been able to consistently reproduce this on a Ganeti cluster at work, using QEMU 2.1.3 and kernels 3.16 and 4.0 in the guests. The issue goes away if I disable the steal time MSR using `-cpu qemu64,-kvm_steal_time`. So, it looks to me as if the steal time MSR is not set/copied properly during live migration, although AFAICT this should be the case after 917367aa968fd4fef29d340e0c7ec8c608dffaab. 
After investigating a bit more, it looks like the issue comes from an overflow in the kernel's accumulate_steal_time() (arch/x86/kvm/x86.c:2023):   static void accumulate_steal_time(struct kvm_vcpu *vcpu)   {           u64 delta;           if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))                   return;           delta = current->sched_info.run_delay - vcpu->arch.st.last_steal; Using systemtap with the attached script to trace KVM execution on the receiving host kernel, we can see that shortly before marking the vCPUs as runnable on a migrated KVM instance with 2 vCPUs, the following happens (** marks lines of interest):  ** 0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns      0 qemu-system-x86(18446): -> kvm_arch_vcpu_load      0 vhost-18446(18447): -> kvm_arch_vcpu_should_kick      5 vhost-18446(18447): <- kvm_arch_vcpu_should_kick     23 qemu-system-x86(18446): <- kvm_arch_vcpu_load      0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl      2 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl      0 qemu-system-x86(18446): -> kvm_arch_vcpu_put      2 qemu-system-x86(18446): -> kvm_put_guest_fpu      3 qemu-system-x86(18446): <- kvm_put_guest_fpu      4 qemu-system-x86(18446): <- kvm_arch_vcpu_put  ** 0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns      0 qemu-system-x86(18446): -> kvm_arch_vcpu_load      1 qemu-system-x86(18446): <- kvm_arch_vcpu_load      0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl      1 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl      0 qemu-system-x86(18446): -> kvm_arch_vcpu_put      1 qemu-system-x86(18446): -> kvm_put_guest_fpu      2 qemu-system-x86(18446): <- kvm_put_guest_fpu      3 qemu-system-x86(18446): <- kvm_arch_vcpu_put  ** 0 qemu-system-x86(18449): kvm_arch_vcpu_load: run_delay=40304 ns steal=7856949 ns      0 qemu-system-x86(18449): -> kvm_arch_vcpu_load  ** 7 qemu-system-x86(18449): delta: 18446744073701734971 ns, steal=7856949 ns, run_delay=40304 ns     10 qemu-system-x86(18449): <- kvm_arch_vcpu_load  ** 0 qemu-system-x86(18449): -> kvm_arch_vcpu_ioctl_run      4 qemu-system-x86(18449): -> kvm_arch_vcpu_runnable      6 qemu-system-x86(18449): <- kvm_arch_vcpu_runnable      ...      0 qemu-system-x86(18448): kvm_arch_vcpu_load: run_delay=0 ns steal=7856949 ns      0 qemu-system-x86(18448): -> kvm_arch_vcpu_load  ** 34 qemu-system-x86(18448): delta: 18446744073701694667 ns, steal=7856949 ns, run_delay=0 ns     40 qemu-system-x86(18448): <- kvm_arch_vcpu_load  ** 0 qemu-system-x86(18448): -> kvm_arch_vcpu_ioctl_run      5 qemu-system-x86(18448): -> kvm_arch_vcpu_runnable Now, what's really interesting is that current->sched_info.run_delay gets reset because the tasks (threads) using the vCPUs change, and thus have a different current->sched_info: it looks like task 18446 created the two vCPUs, and then they were handed over to 18448 and 18449 respectively. This is also verified by the fact that during the overflow, both vCPUs have the old steal time of the last vcpu_load of task 18446. However, according to Documentation/virtual/kvm/api.txt:  - vcpu ioctls: These query and set attributes that control the operation    of a single virtual cpu.    Only run vcpu ioctls from the same thread that was used to create the vcpu. So it seems qemu is doing something that it shouldn't: calling vCPU ioctls from a thread that didn't create the vCPU. Note that this probably happens on every QEMU startup, but is not visible because the guest kernel zeroes out the steal time on boot. 
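As a rough sanity check, the wrapped nanosecond delta is consistent in magnitude with the huge steal columns in the /proc/stat dump above once converted to USER_HZ ticks; this ignores the details of the guest's ns-to-tick conversion:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Bogus delta from the trace, in nanoseconds. */
        uint64_t delta_ns = 18446744073701734971ULL;

        /* Back-of-the-envelope conversion to /proc/stat ticks (USER_HZ = 100). */
        uint64_t seconds = delta_ns / 1000000000ULL;   /* ~1.8e10 s, ~584 years */
        uint64_t ticks   = seconds * 100ULL;           /* ~1.8e12 ticks */

        /* Same order of magnitude as the per-cpu steal columns in the
         * /proc/stat dump above (e.g. cpu3: 1223752959076). */
        printf("%llu ns ~= %llu s ~= %llu ticks\n",
               (unsigned long long)delta_ns,
               (unsigned long long)seconds,
               (unsigned long long)ticks);
        return 0;
    }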
There are at least two ways to mitigate the issue without a kernel recompilation:  - The first one is to disable the steal time propagation from host to    guest by invoking qemu with `-cpu qemu64,-kvm_steal_time`. This will    short-circuit accumulate_steal_time() due to (vcpu->arch.st.msr_val &    KVM_MSR_ENABLED) and will completely disable steal time reporting in    the guest, which may not be desired if people rely on it to detect    CPU congestion.  - The other one is using the following systemtap script to prevent the    steal time counter from overflowing by dropping the problematic    samples (WARNING: systemtap guru mode required, use at your own    risk):       probe module("kvm").statement("*@arch/x86/kvm/x86.c:2024") {         if (@defined($delta) && $delta < 0) {           printk(4, "kvm: steal time delta < 0, dropping")           $delta = 0         }       } Note that not all *guests* handle this condition in the same way: 3.2 guests still get the overflow in /proc/stat, but their scheduler continues to work as expected. 3.16 guests OTOH go nuts once steal time overflows and stop accumulating system & user time, while entering an erratic state where steal time in /proc/stat is *decreasing* on every clock tick. -------------------------------------------- Revised statement: > Now, what's really interesting is that current->sched_info.run_delay > gets reset because the tasks (threads) using the vCPUs change, and > thus have a different current->sched_info: it looks like task 18446 > created the two vCPUs, and then they were handed over to 18448 and > 18449 respectively. This is also verified by the fact that during the > overflow, both vCPUs have the old steal time of the last vcpu_load of > task 18446. However, according to Documentation/virtual/kvm/api.txt: The above is not entirely accurate: the vCPUs were created by the threads that are used to run them (18448 and 18449 respectively), it's just that the main thread is issuing ioctls during initialization, as illustrated by the strace output on a different process:  [ vCPU #0 thread creating vCPU #0 (fd 20) ]  [pid 1861] ioctl(14, KVM_CREATE_VCPU, 0) = 20  [pid 1861] ioctl(20, KVM_X86_SETUP_MCE, 0x7fbd3ca40cd8) = 0  [pid 1861] ioctl(20, KVM_SET_CPUID2, 0x7fbd3ca40ce0) = 0  [pid 1861] ioctl(20, KVM_SET_SIGNAL_MASK, 0x7fbd380008f0) = 0  [ vCPU #1 thread creating vCPU #1 (fd 21) ]  [pid 1862] ioctl(14, KVM_CREATE_VCPU, 0x1) = 21  [pid 1862] ioctl(21, KVM_X86_SETUP_MCE, 0x7fbd37ffdcd8) = 0  [pid 1862] ioctl(21, KVM_SET_CPUID2, 0x7fbd37ffdce0) = 0  [pid 1862] ioctl(21, KVM_SET_SIGNAL_MASK, 0x7fbd300008f0) = 0  [ Main thread calling kvm_arch_put_registers() on vCPU #0 ]  [pid 1859] ioctl(20, KVM_SET_REGS, 0x7ffc98aac230) = 0  [pid 1859] ioctl(20, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd38001000) = 0  [pid 1859] ioctl(20, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0  [pid 1859] ioctl(20, KVM_SET_SREGS, 0x7ffc98aac050) = 0  [pid 1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aab820) = 87  [pid 1859] ioctl(20, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0  [pid 1859] ioctl(20, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0  [pid 1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1  [pid 1859] ioctl(20, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0  [pid 1859] ioctl(20, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0  [ Main thread calling kvm_arch_put_registers() on vCPU #1 ]  [pid 1859] ioctl(21, KVM_SET_REGS, 0x7ffc98aac230) = 0  [pid 1859] ioctl(21, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd30001000) = 0  [pid 1859] ioctl(21, 
KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0  [pid 1859] ioctl(21, KVM_SET_SREGS, 0x7ffc98aac050) = 0  [pid 1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aab820) = 87  [pid 1859] ioctl(21, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0  [pid 1859] ioctl(21, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0  [pid 1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1  [pid 1859] ioctl(21, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0  [pid 1859] ioctl(21, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0 Using systemtap again, I noticed that the main thread's run_delay is copied to last_steal only after a KVM_SET_MSRS ioctl which enables the steal time MSR is issued by the main thread (see linux 3.16.7-ckt11-1/arch/x86/kvm/x86.c:2162). Taking an educated guess, I reverted the following qemu commits:  commit 0e5035776df31380a44a1a851850d110b551ecb6  Author: Marcelo Tosatti <mtosatti@redhat.com>  Date: Tue Sep 3 18:55:16 2013 -0300      fix steal time MSR vmsd callback to proper opaque type      Convert steal time MSR vmsd callback pointer to proper X86CPU type.      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>  commit 917367aa968fd4fef29d340e0c7ec8c608dffaab  Author: Marcelo Tosatti <mtosatti@redhat.com>  Date: Tue Feb 19 23:27:20 2013 -0300      target-i386: kvm: save/restore steal time MSR      Read and write steal time MSR, so that reporting is functional across      migration.      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>      Signed-off-by: Gleb Natapov <gleb@redhat.com> and the steal time jump on migration went away. However, steal time was not reported at all after migration, which is expected after reverting 917367aa. So it seems that after 917367aa, the steal time MSR is correctly saved and copied to the receiving side, but then it is restored by the main thread (probably during cpu_synchronize_all_post_init()), causing the overflow when the vCPU threads are unpaused.
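From inside the guest, the same regression can be watched without the monitor by sampling the steal column of /proc/stat (the 8th value on the cpu lines, in clock ticks) before and after the migration or loadvm; a minimal reader, assuming the usual /proc/stat field order (user nice system idle iowait irq softirq steal ...):

    #include <stdio.h>

    /* Read the aggregate steal counter from the first "cpu" line of /proc/stat.
     * A backwards jump or an absurdly large value between two samples taken
     * around the migration reproduces the symptom described above. */
    static long long read_steal_ticks(void)
    {
        long long user, nice, system, idle, iowait, irq, softirq, steal;
        FILE *f = fopen("/proc/stat", "r");
        if (!f)
            return -1;
        int n = fscanf(f, "cpu %lld %lld %lld %lld %lld %lld %lld %lld",
                       &user, &nice, &system, &idle, &iowait, &irq, &softirq, &steal);
        fclose(f);
        return n == 8 ? steal : -1;
    }

    int main(void)
    {
        printf("steal = %lld ticks\n", read_steal_ticks());
        return 0;
    }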
2016-03-09 08:56:04 Liang Chen description
[Impact]
It is possible for vcpu->arch.st.last_steal to be initialized from a thread other than the vCPU thread (e.g. the main thread) via KVM_SET_MSRS. That can cause a steal_time overflow later, when last_steal is subtracted from the vCPU thread's sched_info.run_delay.
[Test Case]
Testing steps with a patched trusty kernel, using savevm & loadvm to simulate the migration process. In the guest:
1. check the steal_time data location: rdmsr 0x4b564d03 <----- returns the start address 0x7fc0d000
2. examine the steal time before savevm in the qemu monitor:
   (qemu) xp /ug 0x7fc0d000
   000000007fc0d000: 144139048 <---- steal time value before savevm
   (qemu) savevm mytestvm7
   (qemu) quit
3. put some load on the host system, e.g. stress --cpu <XXX>
4. start the guest with "-loadvm mytestvm7" to restore the state of the VM, and thus the steal_time MSR
5. examine the steal time value again:
   (qemu) xp /ug 0x7fc0d000
   000000007fc0d000: 147428917 <---- with high CPU load after loadvm, steal time still shows a linear increase
Without the fix, the steal_time value goes backwards (because of the overflow) after the VM state is restored.
-----------------------------------------------------------------------------
I'm pasting in text from Debian Bug 785557 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=785557 because I couldn't find this issue reported. It is present in QEMU 2.3, but I haven't tested later versions. Perhaps someone else will find this bug and confirm for later versions. (Or I will when I have time!)
--------------------------------------------------------------------------------------------
Hi, I'm trying to debug an issue we're having with some debian.org machines running in QEMU 2.1.2 instances (see [1] for more background). In short, after a live migration guests running Debian Jessie (linux 3.16) stop accounting CPU time properly.
/proc/stat in the guest shows no increase in user and system time anymore (regardless of workload) and what stands out are extremely large values for steal time:  % cat /proc/stat  cpu 2400 0 1842 650879168 2579640 0 25 136562317270 0 0  cpu0 1366 0 1028 161392988 1238598 0 11 383803090749 0 0  cpu1 294 0 240 162582008 639105 0 8 39686436048 0 0  cpu2 406 0 338 163331066 383867 0 4 333994238765 0 0  cpu3 332 0 235 163573105 318069 0 1 1223752959076 0 0  intr 355773871 33 10 0 0 0 0 3 0 1 0 0 36 144 0 0 1638612 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 5001741 41 0 8516993 0 3669582 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0  ctxt 837862829  btime 1431642967  processes 8529939  procs_running 1  procs_blocked 0  softirq 225193331 2 77532878 172 7250024 819289 0 54 33739135 176552 105675225 Reading the memory pointed to by the steal time MSRs pre- and post-migration, I can see that post-migration the high bytes are set to 0xff: (qemu) xp /8b 0x1fc0cfc0 000000001fc0cfc0: 0x94 0x57 0x77 0xf5 0xff 0xff 0xff 0xff The "jump" in steal time happens when the guest is resumed on the receiving side. I've also been able to consistently reproduce this on a Ganeti cluster at work, using QEMU 2.1.3 and kernels 3.16 and 4.0 in the guests. The issue goes away if I disable the steal time MSR using `-cpu qemu64,-kvm_steal_time`. So, it looks to me as if the steal time MSR is not set/copied properly during live migration, although AFAICT this should be the case after 917367aa968fd4fef29d340e0c7ec8c608dffaab. 
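The 0xff high bytes shown above are exactly what a wrapped 64-bit counter looks like: the guest-visible steal time lives in struct kvm_steal_time, whose first field is the u64 steal counter. A minimal sketch (assuming that layout; the dumped bytes are the ones from the xp /8b output above) decodes them:

  /* decode the 8 bytes dumped above (xp /8b) as the little-endian u64 steal counter */
  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          const uint8_t dump[8] = { 0x94, 0x57, 0x77, 0xf5, 0xff, 0xff, 0xff, 0xff };
          uint64_t steal = 0;

          for (int i = 0; i < 8; i++)
                  steal |= (uint64_t)dump[i] << (8 * i);   /* little-endian decode */

          printf("steal = %#llx ns\n", (unsigned long long)steal);
          /* prints 0xfffffffff5775794 -- just below 2^64, i.e. the counter has wrapped */
          return 0;
  }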
After investigating a bit more, it looks like the issue comes from an overflow in the kernel's accumulate_steal_time() (arch/x86/kvm/x86.c:2023):   static void accumulate_steal_time(struct kvm_vcpu *vcpu)   {           u64 delta;           if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))                   return;           delta = current->sched_info.run_delay - vcpu->arch.st.last_steal; Using systemtap with the attached script to trace KVM execution on the receiving host kernel, we can see that shortly before marking the vCPUs as runnable on a migrated KVM instance with 2 vCPUs, the following happens (** marks lines of interest):  ** 0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns      0 qemu-system-x86(18446): -> kvm_arch_vcpu_load      0 vhost-18446(18447): -> kvm_arch_vcpu_should_kick      5 vhost-18446(18447): <- kvm_arch_vcpu_should_kick     23 qemu-system-x86(18446): <- kvm_arch_vcpu_load      0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl      2 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl      0 qemu-system-x86(18446): -> kvm_arch_vcpu_put      2 qemu-system-x86(18446): -> kvm_put_guest_fpu      3 qemu-system-x86(18446): <- kvm_put_guest_fpu      4 qemu-system-x86(18446): <- kvm_arch_vcpu_put  ** 0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns      0 qemu-system-x86(18446): -> kvm_arch_vcpu_load      1 qemu-system-x86(18446): <- kvm_arch_vcpu_load      0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl      1 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl      0 qemu-system-x86(18446): -> kvm_arch_vcpu_put      1 qemu-system-x86(18446): -> kvm_put_guest_fpu      2 qemu-system-x86(18446): <- kvm_put_guest_fpu      3 qemu-system-x86(18446): <- kvm_arch_vcpu_put  ** 0 qemu-system-x86(18449): kvm_arch_vcpu_load: run_delay=40304 ns steal=7856949 ns      0 qemu-system-x86(18449): -> kvm_arch_vcpu_load  ** 7 qemu-system-x86(18449): delta: 18446744073701734971 ns, steal=7856949 ns, run_delay=40304 ns     10 qemu-system-x86(18449): <- kvm_arch_vcpu_load  ** 0 qemu-system-x86(18449): -> kvm_arch_vcpu_ioctl_run      4 qemu-system-x86(18449): -> kvm_arch_vcpu_runnable      6 qemu-system-x86(18449): <- kvm_arch_vcpu_runnable      ...      0 qemu-system-x86(18448): kvm_arch_vcpu_load: run_delay=0 ns steal=7856949 ns      0 qemu-system-x86(18448): -> kvm_arch_vcpu_load  ** 34 qemu-system-x86(18448): delta: 18446744073701694667 ns, steal=7856949 ns, run_delay=0 ns     40 qemu-system-x86(18448): <- kvm_arch_vcpu_load  ** 0 qemu-system-x86(18448): -> kvm_arch_vcpu_ioctl_run      5 qemu-system-x86(18448): -> kvm_arch_vcpu_runnable Now, what's really interesting is that current->sched_info.run_delay gets reset because the tasks (threads) using the vCPUs change, and thus have a different current->sched_info: it looks like task 18446 created the two vCPUs, and then they were handed over to 18448 and 18449 respectively. This is also verified by the fact that during the overflow, both vCPUs have the old steal time of the last vcpu_load of task 18446. However, according to Documentation/virtual/kvm/api.txt:  - vcpu ioctls: These query and set attributes that control the operation    of a single virtual cpu.    Only run vcpu ioctls from the same thread that was used to create the vcpu. So it seems qemu is doing something that it shouldn't: calling vCPU ioctls from a thread that didn't create the vCPU. Note that this probably happens on every QEMU startup, but is not visible because the guest kernel zeroes out the steal time on boot. 
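To make the wrap-around concrete, here is a minimal user-space sketch of the same unsigned subtraction, using the values logged above for task 18449 (an illustration only, not code from the report or the kernel):

  /* sketch: the u64 subtraction behind "delta: 18446744073701734971 ns" */
  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          uint64_t last_steal = 7856949;   /* ns, recorded while the main thread's run_delay applied */
          uint64_t run_delay  = 40304;     /* ns, sched_info.run_delay of the vCPU thread itself */
          uint64_t delta = run_delay - last_steal;   /* run_delay < last_steal, so this wraps */

          printf("delta = %llu ns\n", (unsigned long long)delta);   /* 18446744073701734971 */
          return 0;
  }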
There are at least two ways to mitigate the issue without a kernel recompilation:  - The first one is to disable the steal time propagation from host to    guest by invoking qemu with `-cpu qemu64,-kvm_steal_time`. This will    short-circuit accumulate_steal_time() due to (vcpu->arch.st.msr_val &    KVM_MSR_ENABLED) and will completely disable steal time reporting in    the guest, which may not be desired if people rely on it to detect    CPU congestion.  - The other one is using the following systemtap script to prevent the    steal time counter from overflowing by dropping the problematic    samples (WARNING: systemtap guru mode required, use at your own    risk):       probe module("kvm").statement("*@arch/x86/kvm/x86.c:2024") {         if (@defined($delta) && $delta < 0) {           printk(4, "kvm: steal time delta < 0, dropping")           $delta = 0         }       } Note that not all *guests* handle this condition in the same way: 3.2 guests still get the overflow in /proc/stat, but their scheduler continues to work as expected. 3.16 guests OTOH go nuts once steal time overflows and stop accumulating system & user time, while entering an erratic state where steal time in /proc/stat is *decreasing* on every clock tick. -------------------------------------------- Revised statement: > Now, what's really interesting is that current->sched_info.run_delay > gets reset because the tasks (threads) using the vCPUs change, and > thus have a different current->sched_info: it looks like task 18446 > created the two vCPUs, and then they were handed over to 18448 and > 18449 respectively. This is also verified by the fact that during the > overflow, both vCPUs have the old steal time of the last vcpu_load of > task 18446. However, according to Documentation/virtual/kvm/api.txt: The above is not entirely accurate: the vCPUs were created by the threads that are used to run them (18448 and 18449 respectively), it's just that the main thread is issuing ioctls during initialization, as illustrated by the strace output on a different process:  [ vCPU #0 thread creating vCPU #0 (fd 20) ]  [pid 1861] ioctl(14, KVM_CREATE_VCPU, 0) = 20  [pid 1861] ioctl(20, KVM_X86_SETUP_MCE, 0x7fbd3ca40cd8) = 0  [pid 1861] ioctl(20, KVM_SET_CPUID2, 0x7fbd3ca40ce0) = 0  [pid 1861] ioctl(20, KVM_SET_SIGNAL_MASK, 0x7fbd380008f0) = 0  [ vCPU #1 thread creating vCPU #1 (fd 21) ]  [pid 1862] ioctl(14, KVM_CREATE_VCPU, 0x1) = 21  [pid 1862] ioctl(21, KVM_X86_SETUP_MCE, 0x7fbd37ffdcd8) = 0  [pid 1862] ioctl(21, KVM_SET_CPUID2, 0x7fbd37ffdce0) = 0  [pid 1862] ioctl(21, KVM_SET_SIGNAL_MASK, 0x7fbd300008f0) = 0  [ Main thread calling kvm_arch_put_registers() on vCPU #0 ]  [pid 1859] ioctl(20, KVM_SET_REGS, 0x7ffc98aac230) = 0  [pid 1859] ioctl(20, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd38001000) = 0  [pid 1859] ioctl(20, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0  [pid 1859] ioctl(20, KVM_SET_SREGS, 0x7ffc98aac050) = 0  [pid 1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aab820) = 87  [pid 1859] ioctl(20, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0  [pid 1859] ioctl(20, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0  [pid 1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1  [pid 1859] ioctl(20, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0  [pid 1859] ioctl(20, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0  [ Main thread calling kvm_arch_put_registers() on vCPU #1 ]  [pid 1859] ioctl(21, KVM_SET_REGS, 0x7ffc98aac230) = 0  [pid 1859] ioctl(21, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd30001000) = 0  [pid 1859] ioctl(21, 
KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0  [pid 1859] ioctl(21, KVM_SET_SREGS, 0x7ffc98aac050) = 0  [pid 1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aab820) = 87  [pid 1859] ioctl(21, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0  [pid 1859] ioctl(21, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0  [pid 1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1  [pid 1859] ioctl(21, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0  [pid 1859] ioctl(21, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0 Using systemtap again, I noticed that the main thread's run_delay is copied to last_steal only after a KVM_SET_MSRS ioctl which enables the steal time MSR is issued by the main thread (see linux 3.16.7-ckt11-1/arch/x86/kvm/x86.c:2162). Taking an educated guess, I reverted the following qemu commits:  commit 0e5035776df31380a44a1a851850d110b551ecb6  Author: Marcelo Tosatti <mtosatti@redhat.com>  Date: Tue Sep 3 18:55:16 2013 -0300      fix steal time MSR vmsd callback to proper opaque type      Convert steal time MSR vmsd callback pointer to proper X86CPU type.      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>  commit 917367aa968fd4fef29d340e0c7ec8c608dffaab  Author: Marcelo Tosatti <mtosatti@redhat.com>  Date: Tue Feb 19 23:27:20 2013 -0300      target-i386: kvm: save/restore steal time MSR      Read and write steal time MSR, so that reporting is functional across      migration.      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>      Signed-off-by: Gleb Natapov <gleb@redhat.com> and the steal time jump on migration went away. However, steal time was not reported at all after migration, which is expected after reverting 917367aa. So it seems that after 917367aa, the steal time MSR is correctly saved and copied to the receiving side, but then it is restored by the main thread (probably during cpu_synchronize_all_post_init()), causing the overflow when the vCPU threads are unpaused.
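The actual patch applied to the Ubuntu kernels tracked below may differ; purely as an illustration, the overflow could be avoided in the same spirit as the systemtap workaround quoted above, by clamping a negative difference to zero inside accumulate_steal_time(). The trailing last_steal/accum_steal assignments are assumed here to complete the excerpt quoted earlier and may not match the applied fix exactly:

  static void accumulate_steal_time(struct kvm_vcpu *vcpu)
  {
          u64 delta;

          if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
                  return;

          /* do not let the u64 subtraction wrap when last_steal was
           * recorded against another thread's sched_info (see above) */
          if (current->sched_info.run_delay < vcpu->arch.st.last_steal)
                  delta = 0;
          else
                  delta = current->sched_info.run_delay -
                          vcpu->arch.st.last_steal;

          /* assumed bookkeeping, mirroring the 3.16-era function */
          vcpu->arch.st.last_steal = current->sched_info.run_delay;
          vcpu->arch.st.accum_steal = delta;
  }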
2016-03-10 12:17:59 Tim Gardner linux (Ubuntu): importance Undecided Medium
2016-03-10 12:18:16 Tim Gardner nominated for series Ubuntu Trusty
2016-03-10 12:18:16 Tim Gardner bug task added linux (Ubuntu Trusty)
2016-03-10 12:18:16 Tim Gardner nominated for series Ubuntu Vivid
2016-03-10 12:18:16 Tim Gardner bug task added linux (Ubuntu Vivid)
2016-03-10 12:18:48 Tim Gardner nominated for series Ubuntu Xenial
2016-03-10 12:18:48 Tim Gardner bug task added linux (Ubuntu Xenial)
2016-03-10 12:18:57 Tim Gardner linux (Ubuntu Xenial): status In Progress Fix Released
2016-03-10 12:19:03 Tim Gardner nominated for series Ubuntu Wily
2016-03-10 12:19:03 Tim Gardner bug task added linux (Ubuntu Wily)
2016-03-10 12:19:33 Tim Gardner linux (Ubuntu Trusty): status New In Progress
2016-03-10 12:19:33 Tim Gardner linux (Ubuntu Trusty): assignee Liang Chen (cbjchen)
2016-03-10 12:20:02 Tim Gardner linux (Ubuntu Vivid): status New In Progress
2016-03-10 12:20:02 Tim Gardner linux (Ubuntu Vivid): assignee Liang Chen (cbjchen)
2016-03-10 12:20:26 Tim Gardner linux (Ubuntu Wily): status New In Progress
2016-03-10 12:20:26 Tim Gardner linux (Ubuntu Wily): assignee Liang Chen (cbjchen)
2016-03-13 14:03:24 Launchpad Janitor branch linked lp:ubuntu/trusty-proposed/linux-lts-xenial
2016-03-30 20:07:20 Malte S. Stretz bug task added linux (Debian)
2016-03-31 15:47:14 Bug Watch Updater linux (Debian): status Unknown Fix Released
2016-05-17 21:55:36 Kamal Mostafa linux (Ubuntu Trusty): status In Progress Fix Committed
2016-05-17 21:55:46 Kamal Mostafa linux (Ubuntu Wily): status In Progress Fix Released
2016-05-17 21:56:47 Kamal Mostafa linux (Ubuntu Vivid): status In Progress Fix Released
2016-06-14 13:23:38 Simon Déziel bug added subscriber Simon Déziel
2016-06-14 14:20:04 Kamal Mostafa tags verification-needed-trusty
2016-06-15 10:29:08 Liang Chen tags verification-needed-trusty verification-done-trusty
2016-06-27 19:04:21 Launchpad Janitor linux (Ubuntu Trusty): status Fix Committed Fix Released
2016-06-27 19:04:21 Launchpad Janitor cve linked 2016-3134
2016-06-27 19:04:21 Launchpad Janitor cve linked 2016-4482
2016-06-27 19:04:21 Launchpad Janitor cve linked 2016-4565
2016-06-27 19:04:21 Launchpad Janitor cve linked 2016-4569
2016-06-27 19:04:21 Launchpad Janitor cve linked 2016-4578
2016-06-27 19:04:21 Launchpad Janitor cve linked 2016-4580
2016-06-27 19:04:21 Launchpad Janitor cve linked 2016-4913
2016-06-27 19:04:22 Launchpad Janitor linux (Ubuntu Trusty): status Fix Committed Fix Released