> Observe even after the instance has been destroyed, there is still 1 set of allocations for the instance.
To be clear, in Queens when doing a cold migrate, we'll have a set of allocations in placement for the source and destination compute node providers. The source node allocations are tracked against the migration record, and the destination node allocations are tracked against the instance.
If resize_instance fails and we don't roll back allocations, then when you delete the instance, it should remove its allocations from the destination node:
https://github.com/openstack/nova/blob/fba4161f71e47e4155e7f9a1f0d3a41b8107cef5/nova/compute/manager.py#L751
https://github.com/openstack/nova/blob/fba4161f71e47e4155e7f9a1f0d3a41b8107cef5/nova/scheduler/client/report.py#L1628
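To illustrate why deleting the instance only cleans up one of the two sets of allocations: placement tracks allocations per consumer UUID, and the instance and the migration record are separate consumers. A minimal in-memory sketch (hypothetical names; Nova actually does this through the placement REST API's delete-allocations call for a consumer):

```python
# Placement allocations keyed by consumer UUID. During a cold migrate,
# the destination node allocations are held by the instance's UUID and
# the source node allocations by the migration record's UUID.
allocations = {
    "instance-uuid": {"dest-node-rp": {"VCPU": 2, "MEMORY_MB": 2048}},
    "migration-uuid": {"source-node-rp": {"VCPU": 2, "MEMORY_MB": 2048}},
}

def delete_allocations_for_consumer(consumer_uuid):
    """Sketch of the report client behavior linked above: drop all
    allocations held by a single consumer."""
    allocations.pop(consumer_uuid, None)

# Instance delete removes only the allocations tracked against the
# instance's own UUID (the destination node)...
delete_allocations_for_consumer("instance-uuid")

# ...while the source node allocations, tracked against the migration
# record's UUID, are left behind.
assert "instance-uuid" not in allocations
assert "migration-uuid" in allocations
```

This is why the failed-resize rollback matters: nothing in the instance delete path knows to clean up the migration consumer.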
However, the source node resource provider would still have allocations against it in placement, tracked via the migration record, and those don't get removed if we failed to roll back when the resize failed. When we delete the instance, we also delete the migration records associated with it:
https://github.com/openstack/nova/blob/fba4161f71e47e4155e7f9a1f0d3a41b8107cef5/nova/db/sqlalchemy/api.py#L1850
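The result is that the leaked allocations become orphaned: the DB-level delete linked above removes the migration rows tied to the instance, but nothing in that path tells placement to free the allocations held by the migration's UUID. A sketch of the gap, with hypothetical names:

```python
# After a failed, un-rolled-back resize: placement still holds source
# node allocations under the migration record's UUID.
placement_allocations = {
    "migration-uuid": {"source-node-rp": {"VCPU": 2, "MEMORY_MB": 2048}},
}
migrations_db = {"migration-uuid": {"instance_uuid": "instance-uuid"}}

def destroy_instance(instance_uuid):
    """Sketch of instance delete: the DB API deletes migration rows
    referencing the instance, but makes no placement cleanup call for
    the migration consumer."""
    doomed = [muuid for muuid, rec in migrations_db.items()
              if rec["instance_uuid"] == instance_uuid]
    for muuid in doomed:
        del migrations_db[muuid]
    # No placement call here for the migration UUIDs, so their
    # allocations are never freed.

destroy_instance("instance-uuid")
assert not migrations_db                           # migration record gone
assert "migration-uuid" in placement_allocations   # allocations orphaned
```

Once the migration record is gone, there is no longer anything in the Nova DB pointing at that consumer UUID, so the leaked allocations on the source node can only be found by querying placement directly.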