hyperthreading bug in NUMATopologyFilter

Bug #1602814 reported by Chris Friesen on 2016-07-13
Affects: OpenStack Compute (nova)

Bug Description

I recently ran into an issue where I was trying to boot an instance with 8 vCPUs, with hw:cpu_policy=dedicated. The host had 8 pCPUs available, but they were a mix of siblings and non-siblings.

In virt.hardware._pack_instance_onto_cores(), the _get_pinning() function seems to be the culprit. It was called with the following inputs:

(Pdb) threads_no
(Pdb) sibling_set
[CoercedSet([63]), CoercedSet([49]), CoercedSet([48]), CoercedSet([50]), CoercedSet([59, 15]), CoercedSet([18, 62])]
(Pdb) instance_cell.cpuset
CoercedSet([0, 1, 2, 3, 4, 5, 6, 7])
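The failure is easy to reproduce outside nova. Counting it out (a minimal sketch, not nova's actual `_get_pinning()` code): if the packing insists on a uniform number of threads per physical core, neither choice covers the request for the sibling sets above.

```python
# Sibling sets from the pdb session above: four lone pCPUs and two
# hyperthread pairs. The instance wants 8 dedicated vCPUs.
sibling_sets = [{63}, {49}, {48}, {50}, {59, 15}, {18, 62}]
wanted = 8

def can_pack(threads):
    # With a fixed threads-per-core count, only sibling sets holding at
    # least `threads` free siblings are usable.
    usable = [s for s in sibling_sets if len(s) >= threads]
    return len(usable) * threads >= wanted

print(can_pack(1))  # 6 usable cores * 1 thread = 6 < 8 -> False
print(can_pack(2))  # 2 usable cores * 2 threads = 4 < 8 -> False
```

So even though 8 pCPUs are free in total, no uniform threads-per-core packing exists, and `_get_pinning()` returns nothing.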

As we can see, we are looking for 8 vCPUs, and there are 8 pCPUs available. However, when we call _get_pinning() it doesn't give us a mapping:

> /usr/lib/python2.7/site-packages/nova/virt/hardware.py(899)_pack_instance_onto_cores()
-> pinning = _get_pinning(threads_no, sibling_set,
(Pdb) n
> /usr/lib/python2.7/site-packages/nova/virt/hardware.py(900)_pack_instance_onto_cores()
-> instance_cell.cpuset)
(Pdb) n
> /usr/lib/python2.7/site-packages/nova/virt/hardware.py(901)_pack_instance_onto_cores()
-> if pinning:
(Pdb) pinning

This is a bug: if we haven't specified anything regarding hyperthreading (no hw:cpu_thread_policy), then we should be able to run on a mix of sibling and non-sibling pCPUs.
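The relaxed behaviour argued for here can be sketched as follows. This is a hypothetical illustration (`relaxed_pack` is not a nova function): when no thread policy is requested, simply fill the request greedily across the sibling sets, mixing sibling and non-sibling pCPUs.

```python
import itertools

def relaxed_pack(sibling_sets, vcpus):
    # Flatten all free pCPUs, ignoring sibling boundaries entirely.
    free = list(itertools.chain.from_iterable(sorted(s) for s in sibling_sets))
    if len(free) < vcpus:
        return None  # genuinely not enough free pCPUs
    # Pin vCPU i to the i-th free pCPU.
    return dict(zip(range(vcpus), free))

pinning = relaxed_pack([{63}, {49}, {48}, {50}, {59, 15}, {18, 62}], 8)
print(pinning)  # all 8 vCPUs get a distinct pCPU, unlike the strict packing
```

A real fix would still need to prefer sibling-packed placements when they exist (for cache locality) and fall back to a mix like this only when they don't; the sketch shows only that a valid mapping is available in this case.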

Matt Riedemann (mriedem) on 2016-07-13
tags: added: numa
Chris Friesen (cbf123) wrote :

There is a proposed patch for this issue at https://review.openstack.org/#/c/342709

Not clear why it hasn't been linked automatically.
