Activity log for bug #1466780

Date Who What changed Old value New value Message
2015-06-19 09:00:29 Stephen Finucane bug added bug
2015-06-19 09:02:05 Stephen Finucane bug added subscriber Przemyslaw Czesnowicz
2015-06-19 09:02:19 Stephen Finucane bug added subscriber sean mooney
2015-06-19 09:02:34 Stephen Finucane nova: assignee Stephen Finucane (sfinucan)
2015-06-23 09:30:39 Stephen Finucane description

  Old value:

    Using a CPU policy of dedicated ('hw:cpu_policy=dedicated') results in vCPUs being pinned to pCPUs, per the original blueprint:

        http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/virt-driver-cpu-pinning.html

    When scheduling an instance with this extra spec, it would be expected that the 'VirtCPUTopology' object used by 'InstanceNumaCell' objects (which are in turn used by an 'InstanceNumaTopology' object) should reflect the actual configuration. For example, a VM booted with four vCPUs and the 'dedicated' CPU policy should have a NUMA topology similar to one of the below:

        VirtCPUTopology(cores=4,sockets=1,threads=1)
        VirtCPUTopology(cores=2,sockets=1,threads=2)
        VirtCPUTopology(cores=1,sockets=2,threads=2)
        ...

    In summary, cores * sockets * threads = vCPUs. However, this does not appear to happen.

    ---

    # Testing Configuration

    Testing was conducted on a single-node, Fedora 21-based (3.17.8-300.fc21.x86_64) OpenStack instance (built with devstack). The system is a dual-socket, 10-core, HT-enabled system (2 sockets * 10 cores * 2 threads = 40 "pCPUs"; 0-9,20-29 = node0, 10-19,30-39 = node1).

    Two flavors were used:

        openstack flavor create --ram 4096 --disk 20 --vcpus 10 demo.no-pinning
        openstack flavor create --ram 4096 --disk 20 --vcpus 10 demo.pinning
        nova flavor-key demo.pinning set hw:cpu_policy=dedicated hw:cpu_threads_policy=separate

    # Results

    Results vary; however, we have seen seemingly random assignments like so:

    For a three vCPU instance:

        (Pdb) p instance.numa_topology.cells[0].cpu_topology
        VirtCPUTopology(cores=10,sockets=1,threads=1)

    For a four vCPU instance:

        VirtCPUTopology(cores=2,sockets=1,threads=2)

    For a ten vCPU instance:

        VirtCPUTopology(cores=7,sockets=1,threads=2)

    The actual underlying libvirt XML is correct, however. For example, for a three vCPU instance:

        <cputune>
            <shares>3072</shares>
            <vcpupin vcpu='0' cpuset='1'/>
            <vcpupin vcpu='1' cpuset='0'/>
            <vcpupin vcpu='2' cpuset='25'/>
        </cputune>

  New value:

    The same description, with the following note appended:

    UPDATE (23/06/15): The random assignments aren't actually random (thankfully). They correspond to the number of free cores in the system. The reason they change is that the number of cores is changing (as pinned CPUs deplete the available resources). However, I still don't think this is correct/logical.
2015-06-24 15:26:50 Markus Zoeller (markus_z) nova: status New In Progress
2015-08-15 04:53:54 OpenStack Infra nova: status In Progress Fix Committed
2015-09-03 11:51:21 Thierry Carrez nova: status Fix Committed Fix Released
2015-09-03 11:51:21 Thierry Carrez nova: milestone liberty-3
2015-10-15 09:02:00 Thierry Carrez nova: milestone liberty-3 12.0.0
2016-02-24 10:51:12 Stephen Finucane nova: status Fix Released Invalid
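The description entry above (2015-06-23) argues that, for a pinned instance, cores * sockets * threads should equal the guest's vCPU count. Below is a minimal, standalone Python sketch of that consistency check against the values reported in the bug; the namedtuple is only a hypothetical stand-in for nova's VirtCPUTopology object, and this is not nova code.

    # Minimal standalone sketch of the invariant argued for in the description:
    # a pinned instance's reported CPU topology should multiply out to its
    # vCPU count. The namedtuple is a hypothetical stand-in for nova's
    # VirtCPUTopology object, not the real class.
    from collections import namedtuple

    VirtCPUTopology = namedtuple('VirtCPUTopology', ['cores', 'sockets', 'threads'])

    def topology_matches_vcpus(topology, vcpus):
        """Return True if cores * sockets * threads equals the vCPU count."""
        return topology.cores * topology.sockets * topology.threads == vcpus

    # Values observed in the report: only the four-vCPU case satisfies the
    # invariant; the others track free host cores rather than guest vCPUs.
    observed = [
        (3, VirtCPUTopology(cores=10, sockets=1, threads=1)),   # 10 != 3
        (4, VirtCPUTopology(cores=2, sockets=1, threads=2)),    # 4 == 4
        (10, VirtCPUTopology(cores=7, sockets=1, threads=2)),   # 14 != 10
    ]

    for vcpus, topology in observed:
        print(vcpus, topology, topology_matches_vcpus(topology, vcpus))

Per the 23/06/15 update, the mismatched 'cores' values track the number of free cores on the host at scheduling time, which is why they vary from boot to boot as pinned CPUs deplete the available resources.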