2015-06-11 15:22:22 |
Dave Johnston |
description |
I have a system with 32 logical CPUs (2 sockets, 8 cores per socket, hyperthreading enabled).
The NUMA topology is as follows:
numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 65501 MB
node 0 free: 38562 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 65535 MB
node 1 free: 63846 MB
node distances:
node 0 1
0: 10 20
1: 20 10
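The per-node thread counts can be read straight off that output; a quick sketch (the here-variable stands in for live `numactl --hardware` output on this host):

```shell
# Count hardware threads per NUMA node from the `numactl --hardware` output.
# The variable below stands in for live output on this host.
out='node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31'
node0_cpus=$(echo "$out" | grep 'node 0 cpus' | cut -d: -f2 | wc -w)
node1_cpus=$(echo "$out" | grep 'node 1 cpus' | cut -d: -f2 | wc -w)
echo "node0=${node0_cpus} node1=${node1_cpus}"   # 16 threads per node
```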
I have defined a flavor in OpenStack with 12 vCPUs as follows:
nova flavor-show c4.3xlarge
+----------------------------+------------------------------------------------------+
| Property | Value |
+----------------------------+------------------------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 40 |
| extra_specs | {"hw:cpu_policy": "dedicated", "hw:numa_nodes": "1"} |
| id | 1d76a225-90c1-4f6f-a59b-000795c33e63 |
| name | c4.3xlarge |
| os-flavor-access:is_public | True |
| ram | 24576 |
| rxtx_factor | 1.0 |
| swap | 8192 |
| vcpus | 12 |
+----------------------------+------------------------------------------------------+
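For reference, a flavor with these properties can be built with the standard nova CLI (a sketch; `auto` lets nova generate the flavor ID):

```shell
# Recreate the flavor shown above (sketch; positional args are name, id, ram, disk, vcpus).
nova flavor-create c4.3xlarge auto 24576 40 12 --swap 8192
# Attach the extra specs that drive CPU pinning and NUMA placement.
nova flavor-key c4.3xlarge set hw:cpu_policy=dedicated hw:numa_nodes=1
```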
I expect to be able to launch two instances of this flavor on the 32-core host, one contained within each NUMA node.
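The arithmetic behind that expectation, as a quick sanity check (assuming all 16 hardware threads per node are usable for dedicated pinning):

```shell
cpus_per_node=16          # hardware threads per NUMA node (from numactl above)
vcpus_per_instance=12     # flavor vcpus
mem_per_node=65501        # MB, the smaller of the two nodes
ram_per_instance=24576    # MB, flavor ram
fit_cpu=$((cpus_per_node / vcpus_per_instance))   # instances per node, CPU-limited
fit_mem=$((mem_per_node / ram_per_instance))      # instances per node, RAM-limited
echo "per-node fit: cpu=${fit_cpu} mem=${fit_mem}"   # cpu=1, mem=2
```

CPU is the binding constraint: one 12-vCPU instance fits per 16-thread node, so two instances should fit on the host, one per node.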
When I launch two instances, the first succeeds, but the second fails. The instance xml is attached, along with the system capabilities.
If I change hw:numa_nodes to 2, then I can launch two copies of the instance.
N.B. for the purposes of testing I have disabled all vcpu_pin and isolcpus settings. |
This was tested on RDO Kilo running on CentOS 7.
I had to upgrade the hypervisor with packages from the ovirt master branch in order to support NUMA pinning. |