libvirt+KVM: High CPU usage on Windows 10 (1803) guests
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Fix Released | Medium | Jack Ding |
Bug Description
One of our clients recently started to use Windows 10 (update 1803) guest instances and reported very "slow responsiveness" of those instances: for example, boot times are in the range of minutes, whereas older Windows instances boot up in seconds.
After some tests with plain libvirt I could relate this issue to the following bug [1] (https:/
[1] and [2] suggest enabling the libvirt hyperv features 'synic' and 'stimer':
<features>
<hyperv>
<synic state='on'/>
<stimer state='on'/>
...
</hyperv>
...
</features>
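As a quick sanity check, the presence of these features in a domain definition can be verified programmatically. Below is a minimal sketch using only the Python standard library; the helper name `hyperv_feature_states` and the sample XML fragment are illustrative, not part of libvirt's API:

```python
import xml.etree.ElementTree as ET

def hyperv_feature_states(domain_xml: str) -> dict:
    """Return the 'state' attribute of each child of <features><hyperv>."""
    root = ET.fromstring(domain_xml)
    hyperv = root.find("./features/hyperv")
    if hyperv is None:
        return {}  # no hyperv enlightenments declared at all
    return {child.tag: child.get("state") for child in hyperv}

# Illustrative fragment of a domain definition (not a full libvirt XML).
sample = """
<domain type='kvm'>
  <features>
    <hyperv>
      <synic state='on'/>
      <stimer state='on'/>
    </hyperv>
  </features>
</domain>
"""

print(hyperv_feature_states(sample))  # {'synic': 'on', 'stimer': 'on'}
```

Running this against the output of `virsh dumpxml <domain>` shows which enlightenments the guest was actually started with.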
However, since our on-prem environment is still running Ocata on Ubuntu 16.04, I'm not able to use those settings there. The only way to work around the issue is to enable the 'HPET' timer:
<clock ...>
<timer name='hpet' present='yes'/>
...
</clock>
whereas Nova disables this by default.
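The same kind of check works for the clock configuration. A small sketch in the same vein (the helper `hpet_present` and the two sample fragments are hypothetical, chosen to contrast Nova's default with the workaround above):

```python
import xml.etree.ElementTree as ET

def hpet_present(domain_xml: str) -> bool:
    """True if the domain XML declares <timer name='hpet' present='yes'/>."""
    root = ET.fromstring(domain_xml)
    for timer in root.findall("./clock/timer"):
        if timer.get("name") == "hpet":
            return timer.get("present") == "yes"
    return False  # no hpet timer element: the hypervisor default applies

# Illustrative fragments: Nova's default vs. the workaround.
nova_default = "<domain><clock offset='utc'><timer name='hpet' present='no'/></clock></domain>"
workaround   = "<domain><clock offset='utc'><timer name='hpet' present='yes'/></clock></domain>"

print(hpet_present(nova_default))  # False
print(hpet_present(workaround))    # True
```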
Making HPET configurable has already been requested and discussed in blueprint [6] (https:/
Hopefully this is useful for anybody facing the same issues.
Environment
===========
$> lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.5 LTS
Release: 16.04
Codename: xenial
$> dpkg -l | grep nova
ii nova-api 2:15.1.
ii nova-common 2:15.1.
ii nova-compute 2:15.1.
ii nova-compute-kvm 2:15.1.
ii nova-compute-
ii nova-conductor 2:15.1.
ii nova-consoleauth 2:15.1.
ii nova-novncproxy 2:15.1.
ii nova-placement-api 2:15.1.
ii nova-scheduler 2:15.1.
ii python-nova 2:15.1.
ii python-novaclient 2:7.1.0-
$> dpkg -l | grep qemu
ii ipxe-qemu 1.0.0+git-
ii qemu-block-
ii qemu-kvm 1:2.8+dfsg-
ii qemu-system-common 1:2.8+dfsg-
ii qemu-system-x86 1:2.8+dfsg-
ii qemu-utils 1:2.8+dfsg-
References
==========
[1] https:/
[2] https:/
[3] https:/
[4] https:/
[5] https:/
[6] https:/
[7] https:/
Thanks for this very nicely detailed bug report. It sounds like you're OK with your workaround, and this should ultimately be resolved by the HPET blueprint in Stein, which you're already aware of. Given that, we'll likely close this as part of that blueprint, since I'm not sure what kind of backportable fix we'd have for this.