So while not perfect, this is not a problem in practice on the compute node, as the Claim class will always re-calculate the NUMA topology based on the instance bits, so setting it makes no difference for the compute. See:
https://github.com/openstack/nova/blob/2176ba7881e4ccae107bb6e614f8854b87f60a65/nova/compute/manager.py#L2175
Also in filters - we set it on the instance_dict that is part of the request_spec - this never even makes it to the compute nodes.
That said - this bug does make consume_from_instance() account for the potentially wrong topology (it subtracts the one calculated from the last potential host, which may or may not be the one that was chosen and that we are consuming from). This in turn can cause requests for multiple instances with NUMA to fail and hit retry more often than they need to.
So it is definitely a bug, it's just limited to exhibiting itself only for requests for multiple instances.
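To illustrate the failure mode, here is a minimal, hypothetical Python sketch (not actual Nova code; all names are made up) of how a filter loop that mutates a shared instance dict leaves the topology fitted against the *last* candidate host, so that consumption accounts against the wrong host:

```python
# Hypothetical, simplified model of the stale-topology issue: the
# filter pass writes a fitted NUMA topology onto a shared instance
# dict for every candidate host, so by the time a host is chosen the
# dict holds the topology fitted for whichever host was filtered last.

def fit_numa_topology(host, requested_cells):
    # Stand-in for fitting the requested cells to a specific host.
    return {"host": host["name"], "cells": requested_cells}

def numa_filter_passes(host, instance):
    # The buggy side effect: mutate the shared instance dict
    # while filtering each candidate host.
    instance["numa_topology"] = fit_numa_topology(host, instance["cells"])
    return True

def consume_from_instance(host, instance):
    # Accounting subtracts based on whatever topology is on the
    # dict right now, not the one fitted for the chosen host.
    return instance["numa_topology"]["host"]

hosts = [{"name": "host-a"}, {"name": "host-b"}]
instance = {"cells": 2, "numa_topology": None}

passing = [h for h in hosts if numa_filter_passes(h, instance)]
chosen = passing[0]                        # scheduler picks host-a ...
accounted = consume_from_instance(chosen, instance)
# ... but accounting uses the topology fitted for host-b, the last
# host the filter touched, so accounted != chosen["name"].
```

With a single instance this slips by, since the claim on the compute re-calculates anyway; with multiple instances the mis-accounted subtraction skews the scheduler's view of remaining capacity and triggers the extra retries described above.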