Using a CPU policy of dedicated ('hw:cpu_policy=dedicated') results in vCPUs being pinned to pCPUs, per the original blueprint:
http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html
When scheduling an instance with this extra spec, it would be expected that the 'VirtCPUTopology' object used by 'InstanceNumaCell' objects (which are in turn used by an 'InstanceNumaTopology' object) should reflect the actual configuration. For example, a VM booted with four vCPUs and the 'dedicated' CPU policy should have a NUMA topology similar to one of the below:
VirtCPUTopology(cores=4,sockets=1,threads=1)
VirtCPUTopology(cores=2,sockets=1,threads=2)
VirtCPUTopology(cores=1,sockets=2,threads=2)
...
In summary, cores * sockets * threads = vCPUs. However, this does not appear to happen.
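To make the expected relationship concrete, here is a small standalone sketch (plain Python, not Nova code; the helper name and the socket/thread bounds are illustrative assumptions) that enumerates the (cores, sockets, threads) combinations whose product equals a given vCPU count:

# Standalone sketch: enumerate topologies whose product matches the requested vCPU count.
def candidate_topologies(vcpus, max_sockets=2, max_threads=2):
    for sockets in range(1, max_sockets + 1):
        for threads in range(1, max_threads + 1):
            if vcpus % (sockets * threads) == 0:
                yield (vcpus // (sockets * threads), sockets, threads)

for cores, sockets, threads in candidate_topologies(4):
    print("VirtCPUTopology(cores=%d,sockets=%d,threads=%d)" % (cores, sockets, threads))
# -> (4,1,1), (2,1,2), (2,2,1), (1,2,2), each satisfying cores * sockets * threads = 4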
---
# Testing Configuration
Testing was conducted on a single-node, Fedora 21-based (3.17.8-300.fc21.x86_64) OpenStack instance (built with devstack). The system is a dual-socket, 10-core, HT-enabled system (2 sockets * 10 cores * 2 threads = 40 "pCPUs"; 0-9,20-29 = node0, 10-19,30-39 = node1). Two flavors were used:
openstack flavor create --ram 4096 --disk 20 --vcpus 10 demo.no-pinning
openstack flavor create --ram 4096 --disk 20 --vcpus 10 demo.pinning
nova flavor-key demo.pinning set hw:cpu_policy=dedicated hw:cpu_threads_policy=separate
# Results
Results vary; however, we have seen seemingly random assignments like so:
For a three vCPU instance:
(Pdb) p instance.numa_topology.cells[0].cpu_topology
VirtCPUTopology(cores=10,sockets=1,threads=1)
For a four vCPU instance:
VirtCPUTopology(cores=2,sockets=1,threads=2)
For a ten vCPU instance:
VirtCPUTopology(cores=7,sockets=1,threads=2)
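Applying the cores * sockets * threads rule to the observed values above (a plain arithmetic check, not Nova code):

observed = {3: (10, 1, 1), 4: (2, 1, 2), 10: (7, 1, 2)}  # vCPUs -> (cores, sockets, threads) seen above
for vcpus, (cores, sockets, threads) in sorted(observed.items()):
    product = cores * sockets * threads
    print(vcpus, product, "OK" if product == vcpus else "MISMATCH")
# 3 -> 10 (MISMATCH), 4 -> 4 (OK), 10 -> 14 (MISMATCH)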
The actual underlying libvirt XML is correct, however:
For example, for a three vCPU instance:
<cputune>
<shares>3072</shares>
<vcpupin vcpu='0' cpuset='1'/>
<vcpupin vcpu='1' cpuset='0'/>
<vcpupin vcpu='2' cpuset='25'/>
</cputune>
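For reference, a minimal sketch (plain Python with xml.etree, not Nova code) of the sanity check implied by "correct" here: every vCPU in the <cputune> block is pinned to one distinct host pCPU.

import xml.etree.ElementTree as ET

# Parse the <cputune> element shown above and verify the pinning is one-to-one.
cputune = ET.fromstring(
    "<cputune>"
    "<shares>3072</shares>"
    "<vcpupin vcpu='0' cpuset='1'/>"
    "<vcpupin vcpu='1' cpuset='0'/>"
    "<vcpupin vcpu='2' cpuset='25'/>"
    "</cputune>")
pins = {p.get('vcpu'): p.get('cpuset') for p in cputune.findall('vcpupin')}
print(pins)                                  # {'0': '1', '1': '0', '2': '25'}
assert len(set(pins.values())) == len(pins)  # dedicated policy: no two vCPUs share a pCPU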