task_state is not restored on live-migration failure
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Invalid | Undecided | Unassigned |
Bug Description
task_state is not restored on live-migration failure, so users cannot try again.
When a live migration is executed, the instance status changes active -> migrating -> active, as shown below.
a) initial status
+------
| ID | Name | Status | Networks |
+------
| 9e12a6d4-
+------
b) migrating
+------
| ID | Name | Status | Networks |
+------
| 9e12a6d4-
+------
c) after live migration completes: same as a)
Status changes the same way as described above in most failure cases, but it does not when the scheduler raises an exception.
In that case, users cannot try live migration again, because task_state == None is the prerequisite for an instance to be migrated.
For a detailed explanation, please look at nova/compute/
I am trying to explain what happens if scheduler_ raises an exception
(task_state is never rolled back).
Exceptions are raised in several cases: for example, the destination compute does not have enough disk, or the destination compute has a different CPU chipset, and so on.
> def live_migrate(self, context, instance, block_migration,
> disk_over_commit, host_name):
>
> instance = self.update(
> task_state=
> expected_
>
> self.scheduler_
> disk_over_commit, instance, host_name)
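The quoted snippet sets task_state before calling the scheduler, with no rollback if the scheduler call raises. A minimal sketch of the intended fix, using hypothetical names (a toy Instance class and scheduler callable, not nova's actual API or the attached patch):

```python
# Sketch: roll back task_state when the scheduler raises, so retry is possible.
# All names below (Instance, live_migrate, failing_scheduler) are hypothetical.

MIGRATING = "migrating"

class Instance:
    def __init__(self):
        # task_state == None is the prerequisite for starting a migration.
        self.task_state = None

def live_migrate(instance, schedule):
    """Set task_state, call the scheduler, and roll back on failure."""
    if instance.task_state is not None:
        raise RuntimeError("instance is busy; cannot migrate")
    instance.task_state = MIGRATING
    try:
        schedule(instance)
    except Exception:
        # Without this rollback, task_state stays "migrating" forever
        # and the user can never retry the live migration.
        instance.task_state = None
        raise

# Usage: a scheduler that fails, e.g. not enough disk on the destination.
def failing_scheduler(instance):
    raise RuntimeError("destination compute does not have enough disk")

inst = Instance()
try:
    live_migrate(inst, failing_scheduler)
except RuntimeError:
    pass

print(inst.task_state)  # → None, so the user can try again
```

With the rollback in place, a second call to live_migrate on the same instance passes the task_state precondition again.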
The patch is attached.
I didn't check <https://review.openstack.org/#/c/19616/>.