nova libvirt pinning won't work across numa nodes

Bug #1567347 reported by Ksenia Svechnikova
Affects: Mirantis OpenStack (status tracked in 10.0.x)
Milestone: 10.0.x
Status: Confirmed
Importance: Wishlist
Assigned to: Sergey Nikitin

Bug Description

Duplicate of bug https://bugs.launchpad.net/nova/+bug/1438253

Description of the environment:

NUMA topology: 4 NUMA nodes
  ID 0: CPU IDs 0, 20, 1, 21, 2, 22, 3, 23, 4, 24; memory 31.3 GB
  ID 1: CPU IDs 5, 25, 6, 26, 7, 27, 8, 28, 9, 29; memory 31.5 GB
  ID 2: CPU IDs 10, 30, 11, 31, 12, 32, 13, 33, 14, 34; memory 31.5 GB
  ID 3: CPU IDs 15, 35, 16, 36, 17, 37, 18, 38, 19, 39; memory 31.5 GB

Steps to reproduce:

1. Prepare a cluster with 2+1 nodes on hardware with VLAN segmentation and NUMA nodes available on one node
2. Assign the compute role to the node with NUMA
3. Update nova CPU pinning for the nova compute to 12 pinned CPUs: the host has 4 NUMA nodes (10 CPUs each), so the pin set takes CPUs from 2 NUMA nodes
4. Assign the compute role to another node with NUMA

5. Deploy the cluster
6. Check nova.conf on the compute node with NUMA. It should contain CPUs from two NUMA nodes:
vcpu_pin_set=0,20,1,21,2,22,3,23,4,24,5,25 (from 2 NUMAs)
7. Check that the AggregateInstanceExtraSpecsFilter and NUMATopologyFilter filters are enabled in nova-scheduler
8. Run the OSTF tests
9. Create aggregates for instances with CPU pinning

nova aggregate-create performance_2_cpu
nova aggregate-set-metadata performance_2_cpu pinned=true

10. Add one host to the new aggregates

nova aggregate-add-host performance_2_cpu node-2.test.domain.local

11. Create a new flavor for VMs that require CPU pinning within one NUMA node:

nova flavor-create m1.small.performance_5 auto 2048 20 5
nova flavor-key m1.small.performance_5 set hw:cpu_policy=dedicated aggregate_instance_extra_specs:pinned=true

12. Create an instance with this flavor

13. Create new flavors for VMs that require CPU pinning spanning two NUMA nodes:
nova flavor-create m1.small.performance_11 auto 2048 20 11
nova flavor-key m1.small.performance_11 set hw:cpu_policy=dedicated aggregate_instance_extra_specs:pinned=true

nova flavor-create m1.small.performance_12 auto 2048 20 12
nova flavor-key m1.small.performance_12 set hw:cpu_policy=dedicated aggregate_instance_extra_specs:pinned=true

Expected results: VMs can use CPUs from 2 NUMA nodes

Actual result: if CPUs from 2 or more NUMA nodes are pinned, the user cannot boot VMs that use CPUs from more than one of those nodes; instances only get CPUs from a single NUMA node

Reproducibility: 100%

Additional information: we also tried with "hw:numa_nodes": "2" and got a "No valid host" error
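For reference, the vcpu_pin_set from step 6 can be checked against the host topology listed above with a short script (a minimal illustrative sketch; the NUMA_CELLS dict is transcribed from this report, not queried from the host):

```python
# Group the pinned CPUs from nova.conf by the host NUMA cell they belong to.
# Topology transcribed from the "Description of the environment" section above.
NUMA_CELLS = {
    0: {0, 20, 1, 21, 2, 22, 3, 23, 4, 24},
    1: {5, 25, 6, 26, 7, 27, 8, 28, 9, 29},
    2: {10, 30, 11, 31, 12, 32, 13, 33, 14, 34},
    3: {15, 35, 16, 36, 17, 37, 18, 38, 19, 39},
}

def cells_for_pin_set(pin_set_str):
    """Return {cell_id: pinned CPUs in that cell} for a vcpu_pin_set value."""
    pinned = {int(cpu) for cpu in pin_set_str.split(",")}
    return {cell: pinned & cpus for cell, cpus in NUMA_CELLS.items() if pinned & cpus}

usage = cells_for_pin_set("0,20,1,21,2,22,3,23,4,24,5,25")
print(sorted(usage))  # the pin set spans cells 0 and 1
```

This confirms the pin set covers 10 CPUs in cell 0 and 2 CPUs in cell 1, so a dedicated-CPU flavor with more than 10 vCPUs cannot be satisfied by any single cell.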

description: updated
Changed in mos:
milestone: none → 9.0
tags: added: area-nova
Changed in mos:
status: New → Confirmed
Revision history for this message
Ksenia Svechnikova (kdemina) wrote :

Snapshot: https://drive.google.com/file/d/0B2v38w72jlwTa3czMm5pTHVlbmc/view?usp=sharing

Please be aware that there are 2 clusters. For this case, check the logs for env 2 (node-2, node-5, node-6)

Changed in mos:
assignee: MOS Nova (mos-nova) → Sergey Nikitin (snikitin)
Revision history for this message
Sergey Nikitin (snikitin) wrote :

I confirm this problem, but it works as designed. To fix it we would need to implement a new feature in Nova, so we can't fix it in MOS 9.0, because Mitaka has already been released.
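For context on "works as designed": without hw:numa_nodes, a dedicated-CPU instance must fit entirely inside one host NUMA cell, so any flavor requesting more vCPUs than a single cell offers is unschedulable. A minimal sketch of that fitting rule (illustrative only, not Nova's actual scheduler code; cell sizes are taken from the topology in this report):

```python
# Illustrative fitting rule: without hw:numa_nodes, all pinned vCPUs of an
# instance must come from a single host NUMA cell.
def fits_single_cell(vcpus, cell_sizes):
    """True if at least one cell can host all requested dedicated vCPUs."""
    return any(vcpus <= size for size in cell_sizes)

cells = [10, 10, 10, 10]            # 4 cells x 10 CPUs (see topology above)
print(fits_single_cell(5, cells))   # 5-vCPU flavor boots
print(fits_single_cell(11, cells))  # 11- and 12-vCPU flavors get "No valid host"
```

With hw:numa_nodes=2 the request is split into two virtual cells, each of which must fit a host cell; that this also failed here is the behavior reported in the description.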

Changed in mos:
status: Confirmed → Won't Fix
Revision history for this message
Sergey Matov (smatov) wrote :

Hello folks.

In addition to the problem described here, it should be noted that in an OVS+DPDK deployment, vCPU pinning across different NUMA nodes also does not work with vhost-user interfaces. I am currently debugging this.

tags: added: 10.0-reviewed
Revision history for this message
Sergey Nikitin (snikitin) wrote :

It's a feature request.
