Comment 2 for bug 1910466

Stephen Finucane (stephenfinucane) wrote :

I've discussed this on the review, but I think this is _almost_ working as expected. In that review, Sean is removing the code that leans on the threading information stored in the guest's NUMA topology when it was generated. As noted there, I think this code still serves a purpose:

  My understanding of this code is that it allows us to best map the host topology to the guest
  topology. Consider a 32 core CPU with hyperthreading (i.e. 16 + 16). If you boot a 4 core guest
  and those cores are running on cores 0,1,16,17, then exposing CPU threads inside the guest best
  allows that OS to optimize for the case.
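To illustrate the quoted scenario, here is a minimal sketch (the sibling layout is assumed: a common Linux-style enumeration where logical CPU n and n + 16 are hyperthread siblings of the same physical core; this is not Nova code):

```python
# Hypothetical host: 16 physical cores, 2 threads each; logical CPUs
# n and n + 16 are assumed to be hyperthread siblings.
HOST_CORES = 16

def sibling_groups(pinned):
    """Group pinned logical CPUs by the physical core they share."""
    cores = {}
    for cpu in sorted(pinned):
        cores.setdefault(cpu % HOST_CORES, set()).add(cpu)
    return list(cores.values())

# The 4 guest vCPUs from the example, pinned to 0, 1, 16, 17:
groups = sibling_groups({0, 1, 16, 17})
# -> [{0, 16}, {1, 17}]: two physical cores, each contributing both
# threads, so exposing cores=2, threads=2 to the guest mirrors the
# host placement and lets the guest scheduler exploit it.
guest_threads = min(len(g) for g in groups)
```

If the same 4 vCPUs had instead landed on 4 distinct physical cores, each group would have size 1 and exposing threads to the guest would misrepresent the placement.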

While you're correct that the request expressed no preference for the number of threads, the request for a NUMA topology meant we were getting one (of 1) implicitly. I think the bug here is that this implicit request only makes sense if the user hasn't specified any CPU topology information, and we should be outright ignoring the information from 'NUMATopologyCell.cpu_topology' (i.e. [1]) if they have. The fix would be to start doing that, but I don't know how easy that is.
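The proposed check might look roughly like this (a sketch only, not Nova's actual API: the function name and the dict-based cell are hypothetical, though the 'hw:cpu_*' extra spec and 'hw_cpu_*' image property names follow Nova's documented conventions):

```python
def effective_cpu_topology(flavor_extra_specs, image_props, numa_cell):
    """Return the topology recorded on the NUMA cell only as a fallback.

    If the user expressed any CPU topology preference via flavor extra
    specs or image properties, ignore NUMATopologyCell.cpu_topology so
    the explicit request wins over the implicit one derived when the
    guest NUMA topology was generated.
    """
    flavor_keys = ('hw:cpu_sockets', 'hw:cpu_cores', 'hw:cpu_threads',
                   'hw:cpu_max_sockets', 'hw:cpu_max_cores',
                   'hw:cpu_max_threads')
    image_keys = ('hw_cpu_sockets', 'hw_cpu_cores', 'hw_cpu_threads')

    user_specified = (any(k in flavor_extra_specs for k in flavor_keys) or
                      any(k in image_props for k in image_keys))
    if user_specified:
        # Explicit request: drop the implicit per-cell topology.
        return None
    return numa_cell.get('cpu_topology')
```

This keeps the sibling-mapping behaviour for the common case (NUMA requested, no topology specified) while preventing the implicit single-thread topology from overriding an explicit user request.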

[1] https://github.com/openstack/nova/blob/e6f5e814050a19d6f027037424556b2889514ec3/nova/virt/hardware.py#L616-L636