instances always packed to NUMA node 0

Bug #1601994 reported by Rafael Folco
This bug affects 3 people
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Description
===========
Instances are always packed onto NUMA node 0. The NUMA placement policy is undefined when CPU pinning is not used and hw:numa_nodes=1 is set.

Steps to reproduce
==================
Create a flavor with hw:numa_nodes=1 set (hw:cpu_policy unset)

Spawn multiple instances using that flavor

Check the nodeset in each instance's libvirt XML (example commands below)
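A minimal reproduction sketch, assuming the openstack/nova and virsh CLIs on a compute host with more than one NUMA node; the flavor name numa-test, the image name cirros, the instance domain name and the sizing values are placeholders:

    $ openstack flavor create numa-test --ram 2048 --vcpus 2 --disk 10
    $ openstack flavor set numa-test --property hw:numa_nodes=1
    $ nova boot --flavor numa-test --image cirros vm1   # repeat for vm2, vm3, ...
    $ # on the compute host, check which NUMA node each guest was placed on
    $ virsh dumpxml instance-00000001 | grep -E 'nodeset|memnode'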

Expected result
===============

All NUMA nodes should be usable by applying some NUMA placement policy: spread, pack, or random.

Actual result
=============
Only node 0 is used. All others are unused.

Environment
===========

Ubuntu 16.04 (Xenial), OpenStack Mitaka release, libvirt 1.3.1

Note: This issue was found and tested on Ubuntu KVM on Power (ppc64le); however, it affects all architectures.

Tags: numa
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/340569

Changed in nova:
assignee: nobody → Rafael Folco (rafaelfolco)
status: New → In Progress
Matt Riedemann (mriedem)
tags: added: numa
Daniel Berrange (berrange) wrote :

Nova fully fills each NUMA node before considering placing a guest on the next node. This isn't a bug; it's expected behaviour intended to maximise the number of guests that can be packed onto each compute host. If you want to suggest alternative strategies, please submit a blueprint + spec with the proposed design and rationale.
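A minimal, hypothetical Python sketch of the fill-first behaviour described above (an illustration only, not Nova's actual fitting code): each guest is placed on the first host NUMA node that still has enough free vCPUs, which is why every guest lands on node 0 until that node is exhausted.

    # Hypothetical illustration of fill-first ("pack") NUMA placement.
    # node_free_cpus: free vCPUs per host NUMA node, indexed by node id.
    # guest_cpu_counts: vCPUs requested by each guest, in boot order.
    def pack_guests(node_free_cpus, guest_cpu_counts):
        free = list(node_free_cpus)
        placement = []
        for wanted in guest_cpu_counts:
            for node_id, avail in enumerate(free):
                if avail >= wanted:
                    free[node_id] -= wanted
                    placement.append(node_id)
                    break
            else:
                placement.append(None)  # no single node can host this guest
        return placement

    # Example: two 8-vCPU nodes and four 2-vCPU guests all land on node 0.
    print(pack_guests([8, 8], [2, 2, 2, 2]))  # -> [0, 0, 0, 0]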

Changed in nova:
status: In Progress → Invalid
Changed in nova:
status: Invalid → Confirmed
Rafael Folco (rafaelfolco) wrote :

This bug can be reproduced when:
NUMATopologyFilter is not enabled OR the instance is forced onto a host via --availability-zone zone:host (example below),
AND
hw:cpu_policy is unset AND hw:numa_nodes=1 is set.
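A hedged example of the second reproduction path, forcing the instance onto a specific compute host so the scheduler filters (including NUMATopologyFilter) are bypassed; the zone name nova, the host name compute-01 and the flavor/image names are placeholders:

    $ nova boot --flavor numa-test --image cirros --availability-zone nova:compute-01 vm-forced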

The problem with skipping the scheduler filter logic is discussed here:
https://bugs.launchpad.net/nova/+bug/1427772
http://lists.openstack.org/pipermail/openstack-dev/2015-February/056695.html

The side effects on NUMA placement are also reported here:
https://bugzilla.redhat.com/show_bug.cgi?id=1189906

OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/346133

Changed in nova:
status: Confirmed → In Progress
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (master)

Change abandoned by Rafael Folco (<email address hidden>) on branch: master
Review: https://review.openstack.org/340569
Reason: in favor of https://review.openstack.org/346133

OpenStack Infra (hudson-openstack) wrote :

Change abandoned by Rafael Folco (<email address hidden>) on branch: master
Review: https://review.openstack.org/346133

Changed in nova:
status: In Progress → Invalid
assignee: Rafael Folco (rafaelfolco) → nobody
Gustavo Randich (gustavo-randich) wrote :

I look forward to a solution for this issue. We use an in-house, custom-made scheduler and force the host each VM boots on via --availability-zone nova:host, but we still need to spread the load across NUMA nodes for flavors with hw:numa_nodes=1.
