When booting more than one instance with accelerators, and the accelerators are on the same compute node, two problems arise:
One problem is that because we always take the first item (alloc_reqs[0]) of alloc_reqs, a conflict exception is raised when putting the allocations for the second instance.
The other is that because we always take the first item of alloc_reqs_by_rp_uuid.get(selected_host.uuid), selected_alloc_req never changes, so the values in selections_to_return are all the same. That is incorrect for subsequent operations.
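A minimal sketch of the failure mode, using simplified, hypothetical data structures (the dict layout, UUIDs, and helper names below are illustrative, not the actual Nova scheduler code): always taking index 0 hands every instance the same allocation request, while removing the chosen candidate from the pool gives each instance a distinct one.

```python
import random

# Two candidate allocation requests for the same compute node, each
# claiming a different accelerator resource provider (hypothetical UUIDs).
alloc_reqs_by_rp_uuid = {
    "compute-node-1": [
        {"allocations": {"accel-rp-a": {"ACCEL": 1}}},
        {"allocations": {"accel-rp-b": {"ACCEL": 1}}},
    ],
}

def select_buggy(host_uuid):
    # Bug: every call returns the first candidate, so two instances both
    # try to claim accel-rp-a and the second PUT of allocations conflicts.
    return alloc_reqs_by_rp_uuid[host_uuid][0]

def select_fixed(host_uuid):
    # One possible fix: pick a candidate and remove it from the pool so
    # the next instance gets a different allocation request.
    reqs = alloc_reqs_by_rp_uuid[host_uuid]
    choice = random.choice(reqs)
    reqs.remove(choice)
    return choice

# Buggy selection: both instances get the identical request.
a = select_buggy("compute-node-1")
b = select_buggy("compute-node-1")

# Fixed selection: each instance gets a distinct allocation request.
first = select_fixed("compute-node-1")
second = select_fixed("compute-node-1")
```

With the removal in place, selections_to_return would no longer repeat the same allocation, and placing the second instance's allocations would not conflict.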
For more details, see: https://etherpad.opendev.org/p/filter_scheduler_issue_with_accelerators