> Issuing the command to migrate an instance from openstack-6 to another hosts (openstack-17) which does have enough resources
How are you migrating the server exactly? What command or REST API call are you making, with what parameters? Are you specifically targeting openstack-17 or just assuming the scheduler is going to select it?
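For reference, and assuming you are using the openstack CLI (the `<server-uuid>` and flavor are placeholders; `openstack-17` is the destination from your report), the common variants look something like:

```shell
# Cold migration: no destination given, the scheduler picks the host
openstack server migrate <server-uuid>

# Live migration to a specific host (on older releases this forces the
# destination and can bypass normal scheduler resource validation)
openstack server migrate --live openstack-17 <server-uuid>

# Resize: changes the flavor, so VCPU/MEMORY_MB/DISK_GB would change
openstack server resize --flavor <new-flavor> <server-uuid>
```

Knowing which of these you actually ran, with which parameters, narrows down which scheduler code path produced the log message below.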
Because from this log message, it looks like the scheduler is trying to resize to the same host on which the instance is already running:
2019-03-18 12:56:49.112 301805 DEBUG nova.scheduler.client.report [req-786024d5-2ba8-450c-9809-bbafaf7c15bd 7a5e20f2d1fc4af18f959a4666c2265c b07f32d8f1f84ba7bbe821ee7fa4f09a - f750199c451f432f9d615a147744f4f5 f750199c451f432f9d615a147744f4f5] New allocation request containing both source and destination hosts in move operation: {'allocations': [{'resource_provider': {'uuid': u'4ce95dcf-4c42-47cf-bd1e-48a0f4a5ecec'}, 'resources': {u'VCPU': 4, u'MEMORY_MB': 2048, u'DISK_GB': 20}}, {'resource_provider': {'uuid': u'57990d7c-7b10-40ee-916f-324bf7784eed'}, 'resources': {u'VCPU': 4, u'MEMORY_MB': 2048, u'DISK_GB': 20}}]} _move_operation_alloc_request /usr/lib/python2.7/dist-packages/nova/scheduler/client/report.py:202
Do you have allow_resize_to_same_host=True in nova.conf? Maybe that doesn't matter because it looks like you're not doing a resize, but a cold migration, because VCPU/MEMORY_MB/DISK_GB are not changing. Or maybe you're doing a live migration - because it seems you might be getting here:
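If it does turn out to be a resize, that option lives in nova.conf on the compute nodes. A sketch of what to check (it defaults to False):

```ini
[DEFAULT]
# Allow the scheduler to consider the instance's current host as a
# valid resize destination; defaults to False.
allow_resize_to_same_host = True
```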
https://github.com/openstack/nova/blob/18.0.0/nova/scheduler/utils.py#L536
My guess is this is a release from before Queens? Are you running Ocata or Pike or something older? Because I think you might be hitting a symptom of something that was fixed in Queens with this blueprint:
https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/migration-allocations.html