Comment 1 for bug 1707071

Matt Riedemann (mriedem) wrote:

I pushed up a sanity check patch:

https://review.openstack.org/#/c/488187/2

for this and got a hit in the live migration job:

http://logs.openstack.org/87/488187/2/check/gate-tempest-dsvm-multinode-live-migration-ubuntu-xenial/107a810/logs/subnode-2/screen-n-cpu.txt.gz#_Jul_27_22_21_14_766229

Jul 27 22:21:14.766229 ubuntu-xenial-2-node-rax-iad-10131407-754260 nova-compute[921]: WARNING nova.scheduler.client.report [None req-e1963d3c-9cb0-4980-a397-7314c7f483fa tempest-LiveMigrationRemoteConsolesV26Test-535663336 tempest-LiveMigrationRemoteConsolesV26Test-535663336] [instance: 6a8ff75c-22e3-4a17-b0d3-ac1b44f9f7c3] Removing allocations for instance which are currently against more than one compute node resource provider. Current allocations: {u'13b1e5e0-66ef-4533-9a07-b1a3220d6b00': {u'generation': 8, u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}, u'7aa9619d-db83-4da9-b822-f4d66e7143f8': {u'generation': 6, u'resources': {u'VCPU': 1, u'MEMORY_MB': 64}}}
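
To make that failure mode concrete, here is a minimal sketch (not nova's actual code) of the kind of check that produces the warning: pull the instance's allocations from the placement API and flag the case where they span more than one resource provider. GET /allocations/{consumer_uuid} is the real placement route; the endpoint URL, token, and function names are placeholders I made up for illustration.

    import requests

    PLACEMENT = 'http://placement.example.com'  # hypothetical placement endpoint
    TOKEN = 'gAAAA...'                          # hypothetical keystone token

    def allocations_for(instance_uuid):
        # Returns {rp_uuid: {'generation': ..., 'resources': {...}}},
        # the same shape as the "Current allocations" dict in the log above.
        resp = requests.get('%s/allocations/%s' % (PLACEMENT, instance_uuid),
                            headers={'X-Auth-Token': TOKEN})
        resp.raise_for_status()
        return resp.json().get('allocations', {})

    def warn_if_multiple_providers(instance_uuid):
        allocs = allocations_for(instance_uuid)
        if len(allocs) > 1:
            # During a live migration this means both the source and
            # destination compute node providers hold allocations for
            # the same instance at the same time.
            print('instance %s has allocations against %d resource '
                  'providers: %s'
                  % (instance_uuid, len(allocs), sorted(allocs)))
        return allocs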

I have not seen a hit for the other case, where the source node is deleting allocations for an instance but the source node's UUID is not part of the current allocations. That race can exist on a slow system, but the upstream CI system, which doesn't have that much traffic, is probably not slow enough to trigger it.
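
For completeness, a sketch of the guard that second case would need, reusing the hypothetical allocations_for() helper above (again, illustrative names, not nova's actual method):

    def source_allocations_present(source_rp_uuid, instance_uuid):
        # Before the source node removes an instance's allocations,
        # confirm its own resource provider UUID is still one of the
        # allocation keys; if not, the allocations have already moved
        # and deleting them here would be wrong.
        allocs = allocations_for(instance_uuid)
        if source_rp_uuid not in allocs:
            print('source provider %s is not in current allocations %s '
                  'for instance %s; skipping removal'
                  % (source_rp_uuid, sorted(allocs), instance_uuid))
            return False
        return True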