CPU topology does not honour cpu_max_threads when NUMA topology is used
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Fix Released | High | Daniel Berrange | 12.0.0
Bug Description
Consider a flavour with 4 vCPUs and the image property hw:cpu_threads=2. The NUMA topology may prefer a thread count of 4, but because 2 is the best match from the thread preference set, the guest will end up with 2 sockets, 1 core and 2 threads.
Now consider a flavour with 4 vCPUs and the image property hw:cpu_max_threads=2. If the NUMA topology prefers a thread count of 4, the guest must still not exceed 2 threads, yet the current code in nova.virt.hardware ignores cpu_max_threads and returns a topology with 4 threads.
Meanwhile, if the vCPU count is 6 and the NUMA topology prefers a thread count of 4, the code is incapable of coming up with any valid topology at all, because it only considers topologies with an exact thread count of 4, and 4 does not divide into 6. It therefore fails with a nova.exception error claiming the topology cannot be satisfied, which is clearly bogus: 6 vCPUs can easily be satisfied by 6:1:1, 1:6:1, 1:1:6, 2:1:3, 2:3:1, 3:2:1, 3:1:2, 1:2:3 or 1:3:2, so it should have picked 2:1:3, which has the thread count closest to the NUMA topology and the maximum socket count.
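The enumeration above is easy to check mechanically. A minimal sketch (not the nova code itself; the helper name is made up) that lists every sockets:cores:threads factorisation of a vCPU count and applies the "closest thread count, then most sockets" preference described here:

```python
def possible_topologies(vcpus):
    """Every (sockets, cores, threads) triple whose product is vcpus."""
    return [
        (s, c, vcpus // (s * c))
        for s in range(1, vcpus + 1)
        for c in range(1, vcpus + 1)
        if vcpus % (s * c) == 0
    ]

topos = possible_topologies(6)
# Nine factorisations exist for 6 vCPUs, so raising an error is wrong.
print(len(topos))  # 9

# Prefer the thread count closest to the NUMA topology's 4 threads,
# breaking ties on the highest socket count:
best = min(topos, key=lambda t: (abs(t[2] - 4), -t[0]))
print(best)  # (2, 1, 3)
```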
Changed in nova:
importance: Undecided → High
assignee: nobody → Daniel Berrange (berrange)
status: New → In Progress
Changed in nova:
milestone: none → liberty-2
status: Fix Committed → Fix Released
Changed in nova:
milestone: liberty-2 → 12.0.0
Reviewed: https://review.openstack.org/198312
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=f396826314b9f37eb57151f0dd8a8e3b7d8a8a5c
Submitter: Jenkins
Branch: master
commit f396826314b9f37eb57151f0dd8a8e3b7d8a8a5c
Author: Daniel P. Berrange <email address hidden>
Date: Fri Jul 3 11:29:03 2015 +0100
virt: fix picking CPU topologies based on desired NUMA topology
The _get_possible_cpu_topologies() method is intended to return
a list of CPU topologies that honour the constraints set by the
cpu_max_sockets, cpu_max_cores & cpu_max_threads image/flavour
properties. In
commit 770ab8eeb72b184ac6164aeabb89c4bf45f938a9
Author: Nikola Dipanov <email address hidden>
Date: Mon Dec 8 14:45:48 2014 +0100
Make get_best_cpu_topology consider NUMA requested CPU topology
the code was changed so that the desired thread count from the
NUMA topology was passed in. Unfortunately the logic implemented
was flawed: while the cpu_threads preferred thread count was
honoured, the cpu_max_threads value was not.
So if you have

  vcpus=4
  hw_cpu_threads=2
  NUMA threads == 4

it would return CPU topologies with 2 threads, but if you had

  vcpus=4
  hw_cpu_max_threads=2
  NUMA threads == 4

then it would return CPU topologies with 4 threads, which
violates the user's request.
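The violated constraint is simple to state in code. A hypothetical helper (not nova's actual implementation) showing how cpu_max_threads should prune the candidate list in the 4-vCPU case:

```python
def respect_max_threads(topologies, max_threads):
    """Drop any (sockets, cores, threads) topology whose thread count
    exceeds the image/flavour cpu_max_threads limit (hypothetical helper)."""
    return [t for t in topologies if t[2] <= max_threads]

# All factorisations of 4 vCPUs, as (sockets, cores, threads):
topos4 = [(1, 1, 4), (1, 2, 2), (1, 4, 1), (2, 1, 2), (2, 2, 1), (4, 1, 1)]

# With hw_cpu_max_threads=2 the 4-thread topology must be discarded,
# even though the NUMA topology would prefer 4 threads:
allowed = respect_max_threads(topos4, 2)
print(allowed)  # (1, 1, 4) is gone; the other five remain
```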
If you had a vcpu count that is not a multiple of the NUMA threads

  vcpus=6
  NUMA threads = 4

then it would be incapable of determining any topology, as it
only looked for exactly matching threads and 4 does not divide
into 6.
The _threads_requested_by_user() method has a typo in it
causing it to look for whether 'cpu_maxthreads' exists,
but correcting that to 'cpu_max_threads' does not do
anything to fix the actual behaviour, as the max threads
value is never acted upon.
If we clamped the 'min_requested_threads' value calculated
in _get_desirable_cpu_topologies() to cpu_max_threads, the
code still doesn't work, as the vCPU count may not actually
be a multiple of the max threads value. We need to
consider which topologies have a thread count that is
closest to the desired NUMA threads, but not exceeding
the max threads count.
To solve this properly we revert the changes to the
_get_possible_cpu_topologies() method, so that it only
ever considers the user provided max sockets/cores/threads
value from the image/flavour as it originally did.
We then introduce a _filter_for_numa_threads() method
which filters the list of possible topologies, to those
which most closely match the desired number of threads
from the NUMA topology.
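A sketch of that filtering step (assumed behaviour, not the actual nova code): keep the topologies whose thread count matches the desired NUMA thread count, relaxing the target downwards when no exact match exists:

```python
def filter_for_numa_threads(possible, wantthreads):
    """Keep topologies whose thread count best matches, without
    exceeding, the desired NUMA thread count (sketch only)."""
    while wantthreads > 0:
        matches = [t for t in possible if t[2] == wantthreads]
        if matches:
            return matches
        wantthreads -= 1
    return possible

# All factorisations of 6 vCPUs, as (sockets, cores, threads):
topos6 = [(1, 1, 6), (1, 2, 3), (1, 3, 2), (1, 6, 1),
          (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1), (6, 1, 1)]

# NUMA prefers 4 threads; no topology has exactly 4, so 3 is the
# closest count that does not exceed it:
print(filter_for_numa_threads(topos6, 4))  # [(1, 2, 3), (2, 1, 3)]
```

The sort on preferred sockets/cores/threads then picks 2:1:3 from the two survivors, matching the behaviour the bug description asks for.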
Finally we do the sorting based on the preferred
topologies defined by the cpu_sockets/cores/threads
image/flavour properties.
Closes-bug: #1471187
Change-Id: I05f246401526170acb61370ee51355226366a4c9