When using dedicated CPUs, the guest topology doesn't match the host
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| OpenStack Compute (nova) | Fix Released | Medium | Stephen Finucane | |
| Mitaka | Fix Released | Undecided | Stephen Finucane | |
Bug Description
According to "http://
"In the absence of an explicit vCPU topology request, the virt drivers typically expose all vCPUs as sockets with 1 core and 1 thread. When strict CPU pinning is in effect the guest CPU topology will be setup to match the topology of the CPUs to which it is pinned."
What I'm seeing is that when strict CPU pinning is in use, the guest is configured with multiple threads even if the host doesn't have threading enabled.
As an example, I set up a flavor with 2 vCPUs and enabled dedicated CPUs. I then booted an instance of this flavor on two separate compute nodes, one with hyperthreading enabled and one with hyperthreading disabled. In both cases, "virsh dumpxml" gave the following topology:
<topology sockets='1' cores='1' threads='2'/>
When running on the system with hyperthreading disabled, this should presumably have been "cores='2' threads='1'".
Taking this a bit further, even if hyperthreading is enabled on the host it would be more accurate to only specify multiple threads in the guest topology if the vCPUs are actually affined to multiple threads of the same host core. Otherwise it would be more accurate to specify the guest topology with multiple cores of one thread each.
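The behaviour described above can be sketched roughly as follows. This is illustrative Python only, not nova's actual code; the `guest_topology` function and the `thread_siblings` mapping are hypothetical stand-ins for what the driver would derive from the host's CPU information:

```python
def guest_topology(pinned_cpus, thread_siblings):
    """Choose a guest CPU topology for a set of pinned host CPUs.

    pinned_cpus: host CPU ids the guest vCPUs are pinned to.
    thread_siblings: maps each host CPU id to the set of host CPUs
    (including itself) sharing its physical core.

    Only report multiple threads when the pinned CPUs actually land
    on sibling threads of the same host core; otherwise report one
    thread per core.
    """
    # Group the pinned CPUs by the physical core they belong to.
    cores = {frozenset(thread_siblings[cpu]) for cpu in pinned_cpus}
    threads_per_core = max(
        sum(1 for cpu in pinned_cpus if cpu in core) for core in cores
    )
    if threads_per_core > 1:
        return {"sockets": 1, "cores": len(cores), "threads": threads_per_core}
    return {"sockets": 1, "cores": len(pinned_cpus), "threads": 1}

# Hyperthreaded host: CPUs 0 and 4 are sibling threads of one core.
print(guest_topology({0, 4}, {0: {0, 4}, 4: {0, 4}}))
# -> {'sockets': 1, 'cores': 1, 'threads': 2}

# Non-hyperthreaded host: each core has a single thread.
print(guest_topology({0, 1}, {0: {0}, 1: {1}}))
# -> {'sockets': 1, 'cores': 2, 'threads': 1}
```

Under this sketch, the bug amounts to the driver emitting the "threads=2" shape unconditionally instead of consulting the host's thread-sibling layout for the CPUs the guest was actually pinned to.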
Changed in nova:
importance: Undecided → Medium
status: New → Confirmed

Changed in nova:
assignee: nobody → lyanchih (lyanchih)

Changed in nova:
assignee: Chung Chih, Hung (lyanchih) → nobody
assignee: nobody → Stephen Finucane (sfinucan)

Changed in nova:
assignee: Stephen Finucane (sfinucan) → Waldemar Znoinski (wznoinsk)

Changed in nova:
assignee: Waldemar Znoinski (wznoinsk) → Stephen Finucane (sfinucan)
Hey Chris - thanks for the bug report!
Would it be possible to get the actual resulting XML of running "virsh capabilities" on the two hosts (or at least the interesting bits about their CPUs and NUMA topology), and the resulting instance XMLs for each host (so that we can see which actual CPUs the instances got pinned to)?
Related: There are some patches in flight that change behaviour regarding exposing threads to guests, starting at https://review.openstack.org/#/c/229573/. It might be worth trying them out and seeing if they fix this problem.