[libvirt] When guest configured for threads, poor VCPU accounting

Bug #1355921 reported by Jon Grimm
Affects:     OpenStack Compute (nova)
Status:      Opinion
Importance:  Wishlist
Assigned to: Unassigned
Milestone:   (none)

Bug Description

Noticed while testing: https://blueprints.launchpad.net/nova/+spec/virt-driver-vcpu-topology

I have a host advertising 16 VCPUs (2 sockets of 8 cores each). Each core happens to have 8 threads. (This is on a beefy POWER8 system.) With the above blueprint, I can now create a 1-socket, 2-core, 8-thread guest.

All works fine, except that I noticed "Free VCPUS: 0" even though I'm really only using two cores. I would expect to see 14 free VCPUs in this scenario.
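
For reference, a guest topology like this is requested through the flavor extra specs that the blueprint introduces. A minimal sketch (the values mirror the report; the flavor itself is hypothetical):

# Hypothetical flavor extra specs requesting the 1-socket, 2-core,
# 8-thread guest described above (the flavor's vcpus would be 1*2*8 = 16):
extra_specs = {
    "hw:cpu_sockets": "1",
    "hw:cpu_cores": "2",
    "hw:cpu_threads": "8",
}
# libvirt would then build the guest along the lines of:
#   -smp 16,sockets=1,cores=2,threads=8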

Guest lscpu output:
[root@bare-precise ~]# lscpu
Architecture: ppc64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Big Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 8
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Model: IBM pSeries (emulated by qemu)
L1d cache: 64K
L1i cache: 32K
NUMA node0 CPU(s): 0-15

Resulting resource tracker log:
2014-08-12 12:17:18.874 96650 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 0

Tags: libvirt ppc
Joe Gordon (jogo)
tags: added: libvirt
Revision history for this message
ugvddm (271025598-9) wrote :

Hi Jon:

I have tested your issue, but I can't reproduce it. I used the flavor below to create a VM with 3 VCPUs on a host with 8 VCPUs:

+----------------------------+----------------------------------------------+
| Property                   | Value                                        |
+----------------------------+----------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                        |
| OS-FLV-EXT-DATA:ephemeral  | 0                                            |
| disk                       | 1                                            |
| extra_specs                | {"hw:cpu_cores": "1", "hw:cpu_sockets": "3"} |
| id                         | 4c8ffddf-1a07-4aea-bb44-687fc9c6ae46         |
| name                       | m1.tiny                                      |
| os-flavor-access:is_public | True                                         |
| ram                        | 512                                          |
| rxtx_factor                | 1.0                                          |
| swap                       |                                              |
| vcpus                      | 3                                            |
+----------------------------+----------------------------------------------+

Then I can see that the extra_specs I set took effect in the VM:
/usr/bin/kvm-spice -S -M pc-1.1 -enable-kvm -m 512 -smp 3,sockets=3,cores=1,threads=1 -name instance-00000004 -uuid b7198295-3667-4abe-b9d4-07fb5e977550 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack ......

In addition, I see the log below in nova-compute.log:
AUDIT nova.compute.resource_tracker [-] Free VCPUS: 5

Changed in nova:
status: New → Invalid
Revision history for this message
Jon Grimm (jgrimm) wrote :

Hi there. You didn't actually set up a configuration that would create a VM with threads, which is the condition this bug was written against:

" smp 3,sockets=3,cores=1,threads=1 "

Thanks!

Changed in nova:
status: Invalid → New
Revision history for this message
Jon Grimm (jgrimm) wrote :

What I believe is happening: the hypervisor treats available VCPUs as sockets*cores. But when threads are assigned for guest VCPUs (via the mechanisms in the new blueprint), that throws off the accounting, because the resource tracker simply subtracts guest VCPUs from the overall total of VCPUs available. My suggestion would be to count only guest cores against the accounting (since that's what the hypervisor is calling an available VCPU).
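
A minimal sketch of the suspected mismatch, using the numbers from the original report (illustrative arithmetic, not Nova's actual resource tracker code):

# Host: 2 sockets x 8 cores, advertised as 16 VCPUs by the hypervisor.
host_vcpus = 2 * 8                 # 16

# Guest: 1 socket, 2 cores, 8 threads -> 16 guest VCPUs.
guest_vcpus = 1 * 2 * 8            # 16

print(host_vcpus - guest_vcpus)    # 0  <- "Free VCPUS: 0" in the log
print(host_vcpus - (1 * 2))        # 14 <- counting guest cores only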

Revision history for this message
Sean Dague (sdague) wrote :

Marking this as wishlist because it's really an enhancement, based on the fact that POWER processors use a very different topology than we're expecting.

Changed in nova:
status: New → Confirmed
importance: Undecided → Wishlist
Revision history for this message
Jon Grimm (jgrimm) wrote :

Hey Sean, did you test that it behaves properly on x86? It wasn't clear from your triage comment whether you are just assuming it's ppc-only or had actually tested it. I didn't have an x86 system handy at the time, but my reason for opening the bug was partly to see whether it really was a ppc-ism or not.

Revision history for this message
Jon Grimm (jgrimm) wrote :

Finally got around to setting up x86 to try it there.

Same behavior.

Created an 8-VCPU guest with 4 sockets, 1 core/socket, 2 threads/core. (My physical topology on the x86 system is 4 cores, 2 threads/core.) As expected per the original bug write-up, it consumed 8 VCPUs.

That being said, it may still be a wishlist bug, but it's not a POWER-ism as asserted in https://bugs.launchpad.net/nova/+bug/1355921/comments/4 --- same behavior on x86.

Revision history for this message
Jon Grimm (jgrimm) wrote :

Doh. Ignore my comment #6. I just realized that this is indeed different than on x86 and likely a POWER-ism. Recanting for now (and I should have seen this sooner), as there is indeed an oddity in the host OS view of CPU topology when SMT=off on the host (basically anything but the first thread on a core shows as offline).

Changed in nova:
assignee: nobody → Jon Grimm (jgrimm)
Changed in nova:
assignee: Jon Grimm (jgrimm) → nobody
Revision history for this message
Markus Zoeller (markus_z) (mzoeller) wrote :

This wishlist bug has been open a year without any activity. I'm going to move it to "Opinion / Wishlist", which is an easily-obtainable queue of older requests that have come up.

In case you want to work on this, consider writing a blueprint [1] and spec [2]. I recommend reading [3] if you haven't yet. The effort to implement the requested feature is then driven by the blueprint (and spec).

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

tags: added: ppc
Changed in nova:
status: Confirmed → Opinion
Revision history for this message
Rafael Folco (rafaelfolco) wrote :

libvirt-python reports the number of cores instead of VCPUs for POWER when SMT=off:
https://github.com/libvirt/libvirt-python/blob/master/libvirt-override.c#L2770

Libvirt needs to report the number of hardware threads in the machine as the number of vcpus available.
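
One way to see what the driver is working from is to query the host with libvirt-python directly. A minimal sketch (the connection URI is illustrative; getInfo() wraps virNodeGetInfo, which is where the host CPU counts come from):

import libvirt

conn = libvirt.open("qemu:///system")
# getInfo() returns [model, memory_MB, active_cpus, mhz, numa_nodes,
#                    sockets, cores_per_socket, threads_per_core]
model, mem_mb, cpus, mhz, nodes, sockets, cores, threads = conn.getInfo()

# On POWER with SMT=off, the active CPU count excludes the offline
# secondary threads, so it can sit far below the full thread count:
print("active CPUs reported:", cpus)
print("total hardware threads:", nodes * sockets * cores * threads)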
