nova libvirt pinning not reflected in VirtCPUTopology
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Invalid | Undecided | Stephen Finucane |
Bug Description
Using a CPU policy of dedicated ('hw:cpu_policy=dedicated'):
http://
When scheduling an instance with this extra spec, it would be expected that the 'VirtCPUTopology' object used by 'InstanceNumaCell' objects (which are in turn used by an 'InstanceNumaTopology' object) would reflect the pinning, e.g.:
VirtCPUTopology
VirtCPUTopology
VirtCPUTopology
...
In summary, cores * sockets * threads = vCPUs. However, this does not appear to happen.
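As a minimal sketch of that invariant (the `Topology` namedtuple below is a hypothetical stand-in for nova's `VirtCPUTopology` object, not the real class):

```python
from collections import namedtuple

# Illustrative stand-in for nova's VirtCPUTopology object.
Topology = namedtuple("Topology", ["cores", "sockets", "threads"])

def matches_vcpus(topo, vcpus):
    """Expected invariant: cores * sockets * threads == vCPU count."""
    return topo.cores * topo.sockets * topo.threads == vcpus

# A 10-vCPU flavor should yield a topology whose product is 10:
print(matches_vcpus(Topology(cores=10, sockets=1, threads=1), 10))  # True
print(matches_vcpus(Topology(cores=5, sockets=1, threads=1), 10))   # False
```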
---
# Testing Configuration
Testing was conducted on a single-node, Fedora 21-based (kernel 3.17.8-) deployment, using the following flavors:
openstack flavor create --ram 4096 --disk 20 --vcpus 10 demo.no-pinning
openstack flavor create --ram 4096 --disk 20 --vcpus 10 demo.pinning
nova flavor-key demo.pinning set hw:cpu_policy=dedicated
# Results
Results vary; we have seen seemingly random assignments like so:
For a three vCPU instance:
(Pdb) p instance.
VirtCPUTopology
For a four vCPU instance:
VirtCPUTopology
For a ten vCPU instance:
VirtCPUTopology
The actual underlying libvirt XML is correct, however:
For example, for a three vCPU instance:
<cputune>
<vcpupin vcpu='0' cpuset='1'/>
<vcpupin vcpu='1' cpuset='0'/>
<vcpupin vcpu='2' cpuset='25'/>
</cputune>
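That pinning can be cross-checked mechanically. A short sketch using only the standard library (the fragment below repeats the three-vCPU example, using libvirt's actual `vcpu` attribute name):

```python
import xml.etree.ElementTree as ET

# cputune fragment for the three-vCPU instance above.
XML = """
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='0'/>
  <vcpupin vcpu='2' cpuset='25'/>
</cputune>
"""

# Map each guest vCPU to the host CPU set it is pinned to.
pins = {
    int(pin.get("vcpu")): pin.get("cpuset")
    for pin in ET.fromstring(XML).findall("vcpupin")
}
print(pins)  # {0: '1', 1: '0', 2: '25'}
```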
UPDATE(23/06/15): The random assignments aren't actually random (thankfully). They correspond to the number of free cores in the system. The reason they change is that the number of free cores is changing (as pinned CPUs deplete resources). However, I still don't think this is correct/logical.
Changed in nova:
assignee: nobody → Stephen Finucane (sfinucan)
description: updated
Changed in nova:
milestone: none → liberty-3
status: Fix Committed → Fix Released
Changed in nova:
milestone: liberty-3 → 12.0.0
@Stephen Finucane (sfinucan):
Since you are set as assignee, I am switching the status to "In Progress".