Issuing the command to migrate an instance from openstack-6 to another host (openstack-17), which does have enough resources, produces the following in nova-compute.log:
2019-03-18 12:56:49.111 301805 DEBUG nova.scheduler.client.report [req-786024d5-2ba8-450c-9809-bbafaf7c15bd 7a5e20f2d1fc4af18f959a4666c2265c b07f32d8f1f84ba7bbe821ee7fa4f09a - f750199c451f432f9d615a147744f4f5 f750199c451f432f9d615a147744f4f5] Doubling-up allocation request for move operation. _move_operation_alloc_request /usr/lib/python2.7/dist-packages/nova/scheduler/client/report.py:162
2019-03-18 12:56:49.112 301805 DEBUG nova.scheduler.client.report [req-786024d5-2ba8-450c-9809-bbafaf7c15bd 7a5e20f2d1fc4af18f959a4666c2265c b07f32d8f1f84ba7bbe821ee7fa4f09a - f750199c451f432f9d615a147744f4f5 f750199c451f432f9d615a147744f4f5] New allocation request containing both source and destination hosts in move operation: {'allocations': [{'resource_provider': {'uuid': u'4ce95dcf-4c42-47cf-bd1e-48a0f4a5ecec'}, 'resources': {u'VCPU': 4, u'MEMORY_MB': 2048, u'DISK_GB': 20}}, {'resource_provider': {'uuid': u'57990d7c-7b10-40ee-916f-324bf7784eed'}, 'resources': {u'VCPU': 4, u'MEMORY_MB': 2048, u'DISK_GB': 20}}]} _move_operation_alloc_request /usr/lib/python2.7/dist-packages/nova/scheduler/client/report.py:202
2019-03-18 12:56:49.146 301805 WARNING nova.scheduler.client.report [req-786024d5-2ba8-450c-9809-bbafaf7c15bd 7a5e20f2d1fc4af18f959a4666c2265c b07f32d8f1f84ba7bbe821ee7fa4f09a - f750199c451f432f9d615a147744f4f5 f750199c451f432f9d615a147744f4f5] Unable to submit allocation for instance 71111e00-7913-4de9-8f45-ce13fcb8a104 (409 <html>
<head>
<title>409 Conflict</title>
</head>
<body>
<h1>409 Conflict</h1>
There was a conflict when trying to complete your request.<br /><br />
Unable to allocate inventory: Unable to create allocation for 'MEMORY_MB' on resource provider '4ce95dcf-4c42-47cf-bd1e-48a0f4a5ecec'. The requested amount would exceed the capacity.
</body>
</html>)
2019-03-18 12:56:49.147 301805 WARNING nova.scheduler.utils [req-786024d5-2ba8-450c-9809-bbafaf7c15bd 7a5e20f2d1fc4af18f959a4666c2265c b07f32d8f1f84ba7bbe821ee7fa4f09a - f750199c451f432f9d615a147744f4f5 f750199c451f432f9d615a147744f4f5] Failed to compute_task_migrate_server: No valid host was found. Unable to move instance 71111e00-7913-4de9-8f45-ce13fcb8a104 to host openstack-17. There is not enough capacity on the host for the instance.: NoValidHost: No valid host was found. Unable to move instance 71111e00-7913-4de9-8f45-ce13fcb8a104 to host openstack-17. There is not enough capacity on the host for the instance.
2019-03-18 12:56:49.148 301805 WARNING nova.scheduler.utils [req-786024d5-2ba8-450c-9809-bbafaf7c15bd 7a5e20f2d1fc4af18f959a4666c2265c b07f32d8f1f84ba7bbe821ee7fa4f09a - f750199c451f432f9d615a147744f4f5 f750199c451f432f9d615a147744f4f5] [instance: 71111e00-7913-4de9-8f45-ce13fcb8a104] Setting instance to ACTIVE state.: NoValidHost: No valid host was found. Unable to move instance 71111e00-7913-4de9-8f45-ce13fcb8a104 to host openstack-17. There is not enough capacity on the host for the instance.
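For context on the 409 above: for a move operation the scheduler submits a doubled allocation covering both the source and the destination provider, and placement then verifies each provider can hold its share using the usual capacity formula. A minimal sketch of that per-resource-class check (the numbers below are hypothetical, not taken from this deployment):

```python
# Placement's capacity test per resource class, roughly:
#   used + requested <= (total - reserved) * allocation_ratio
# If the source provider is already near its MEMORY_MB limit, the doubled
# move request fails there and placement returns 409 Conflict.

def fits(total, reserved, allocation_ratio, used, requested):
    """Return True if the requested amount fits on the provider."""
    return used + requested <= (total - reserved) * allocation_ratio

# Hypothetical MEMORY_MB inventory for the source provider (openstack-6):
total, reserved, ratio = 65536, 512, 1.0
used = 64000       # hypothetical current MEMORY_MB consumption
requested = 2048   # the instance's share of the doubled allocation

print(fits(total, reserved, ratio, used, requested))  # -> False: exceeds capacity
```

This is why the error names the *source* provider even though the operator asked about capacity on the destination.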
To identify which host resource provider '4ce95dcf-4c42-47cf-bd1e-48a0f4a5ecec' belongs to, I queried the nova_api database:
select * from resource_providers where uuid='4ce95dcf-4c42-47cf-bd1e-48a0f4a5ecec';
+---------------------+---------------------+----+--------------------------------------+------------------+------------+----------+
| created_at | updated_at | id | uuid | name | generation | can_host |
+---------------------+---------------------+----+--------------------------------------+------------------+------------+----------+
| 2018-05-09 11:00:01 | 2019-03-14 10:47:55 | 39 | 4ce95dcf-4c42-47cf-bd1e-48a0f4a5ecec | openstack-6.maas | 171 | NULL |
+---------------------+---------------------+----+--------------------------------------+------------------+------------+----------+
1 row in set (0.00 sec)
So that is openstack-6, not openstack-17 as mentioned in the logging above. This is not clear from the logging alone, and there does not seem to be a command to retrieve the resource provider based on its UUID, which is the only identifier logged.
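For what it's worth, deployments that have the osc-placement plugin for python-openstackclient installed can resolve a provider UUID without querying the database directly (availability depends on the release in use; requires configured cloud credentials):

```shell
# Show the provider record (name, generation) for a given UUID:
openstack resource provider show 4ce95dcf-4c42-47cf-bd1e-48a0f4a5ecec

# Show current usage per resource class on that provider:
openstack resource provider usage show 4ce95dcf-4c42-47cf-bd1e-48a0f4a5ecec
```

That said, the point stands that the warning message itself only logs the UUID, which makes triage harder than it needs to be.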
While the bug you are reporting may have merit, I fail to see how it applies to the charm itself.
Should this bug have a task targeted at the upstream ``Nova`` project?