NUMA scheduling will not attempt to pack an instance onto a host
Bug #1386236 reported by Andrew Theurer
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Fix Released | High | Nikola Đipanov |
Juno | Fix Released | High | Nikola Đipanov |
Bug Description
When creating a flavor that includes "hw:numa_nodes": "1", all instances booted with that flavor are pinned to NUMA node0. Multiple instances end up on node0 while node1 remains unused. Our expectation was that instances would be balanced across NUMA nodes.
To recreate:
1) Ensure you have a compute node with at least 2 sockets
2) Create a flavor whose vCPU count and memory fit within one socket
3) Add the flavor key: nova flavor-key <flavor> set hw:numa_nodes=1
4) Boot more than one instance
5) Verify where the vCPUs are pinned (a verification sketch follows these steps)
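For step 5, one possible way to check where each vCPU landed is to query libvirt on the compute node. This is a minimal sketch, assuming python-libvirt is installed and the instance's libvirt domain name is known; the name "instance-00000001" is a placeholder, not taken from this bug report:

```python
#!/usr/bin/env python
# Minimal verification sketch (assumes python-libvirt on the compute node).
# "instance-00000001" is a placeholder domain name; substitute your own.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')

# vcpus() returns (per-vCPU info, per-vCPU pin masks); the last field of each
# info tuple is the physical CPU the vCPU was last running on.
info, pin_masks = dom.vcpus()
for number, state, cpu_time, cpu in info:
    print('vCPU %d -> host pCPU %d' % (number, cpu))
```

Mapping the reported pCPUs to NUMA nodes (for example via lscpu or /sys/devices/system/node) then shows whether every instance ended up on node0, as described above.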
Changed in nova:
status: New → Confirmed
importance: Undecided → High
summary: NUMA scheduling broken → NUMA scheduling will not attempt to pack an instance onto a host

Changed in nova:
assignee: Nikola Đipanov (ndipanov) → sahid (sahid-ferdjaoui)

Changed in nova:
assignee: sahid (sahid-ferdjaoui) → Nikola Đipanov (ndipanov)

Changed in nova:
milestone: none → kilo-1
status: Fix Committed → Fix Released

Changed in nova:
milestone: kilo-1 → 2015.1.0
The current NUMA code in Juno contains overly restrictive logic whereby guest NUMA node N is *always* placed on host NUMA node N. Talking with Nikola, this should be fairly straightforward to rectify, and he indicates he will fix it while working on the CPU pinning work.
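To illustrate the difference described in that comment, here is a minimal, hypothetical sketch; it is not nova's actual fitting code, and all names and data structures in it are invented for the example. It contrasts an identity mapping of guest NUMA node N onto host NUMA node N with a fit that is allowed to try every host node:

```python
# Hypothetical sketch only, not nova's implementation. It contrasts the Juno
# behaviour (guest NUMA node N is always mapped to host NUMA node N) with a
# fit that may place a guest node on any host node with enough free resources.

def fit_identity(guest_nodes, host_nodes):
    """Juno-style fit: guest node i may only land on host node i."""
    if len(guest_nodes) > len(host_nodes):
        return None
    for i, guest in enumerate(guest_nodes):
        host = host_nodes[i]
        if guest['cpus'] > host['free_cpus'] or guest['mem'] > host['free_mem']:
            return None               # other host nodes are never considered
    return list(range(len(guest_nodes)))

def fit_any_node(guest_nodes, host_nodes):
    """Intended fit: try every host node for each guest node."""
    placement, used = [], set()
    for guest in guest_nodes:
        for i, host in enumerate(host_nodes):
            if i in used:
                continue
            if guest['cpus'] <= host['free_cpus'] and guest['mem'] <= host['free_mem']:
                placement.append(i)
                used.add(i)
                break
        else:
            return None               # this guest node fits nowhere
    return placement

# A one-node guest that no longer fits on host node 0 but would fit on node 1
# (e.g. after earlier instances filled node 0, as in the bug description).
guest = [{'cpus': 4, 'mem': 4096}]
host = [{'free_cpus': 2, 'free_mem': 2048},
        {'free_cpus': 8, 'free_mem': 8192}]
print(fit_identity(guest, host))      # None -> placement fails
print(fit_any_node(guest, host))      # [1]  -> packs onto host node 1
```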