Comment 0 for bug 886289

Chris Behrens (cbehrens) wrote : distributed_scheduler should set instance to error state

A recent change went in where compute/api will create the instance DB entry directly if zone routing is off... instead of waiting for the scheduler to do it. So now, if the scheduler raises NoValidHost or errors out in some other way... it needs to make sure to set the vm_state on the instance to ERROR.

Something like this is needed (note this is untested... and this example is for distributed_scheduler. There are probably other places where this needs to happen... other schedulers, etc.):

--- a/nova/scheduler/distributed_scheduler.py
+++ b/nova/scheduler/distributed_scheduler.py
@@ -36,6 +36,7 @@ from nova import log as logging
 from nova import rpc

 from nova.compute import api as compute_api
+from nova.compute import vm_states
 from nova.scheduler import api
 from nova.scheduler import driver
 from nova.scheduler import filters
@@ -99,6 +100,8 @@ class DistributedScheduler(driver.Scheduler):
                                         *args, **kwargs)

         if not weighted_hosts:
+            if request_spec.get('id'):
+                db.instance_update(context, request_spec['id'], {'vm_state': vm_states.ERROR})
             raise driver.NoValidHost(_('No hosts were available'))
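
Since the other schedulers hit the same problem, another option would be to handle the failure once in (or near) the scheduler manager so every driver gets the same behaviour. The sketch below is just a rough, untested illustration of that idea, not an actual patch; the wrapper function name, the helper, and the assumption that the request spec is passed in explicitly are all mine:

from nova import db
from nova import log as logging
from nova.compute import vm_states
from nova.scheduler import driver

LOG = logging.getLogger('nova.scheduler.manager')


def _set_instance_error(context, request_spec):
    """Mark the instance ERROR so it doesn't sit in BUILDING forever."""
    instance_id = request_spec.get('id') if request_spec else None
    if not instance_id:
        return
    db.instance_update(context, instance_id,
                       {'vm_state': vm_states.ERROR})


def schedule_run_instance_with_error_handling(scheduler, context,
                                              request_spec, *args, **kwargs):
    """Call a driver's schedule_run_instance(), marking the instance
    ERROR if no host could be found (hypothetical wrapper, untested)."""
    try:
        return scheduler.schedule_run_instance(context, request_spec,
                                               *args, **kwargs)
    except driver.NoValidHost:
        LOG.warning('No valid host found; setting instance to ERROR')
        _set_instance_error(context, request_spec)
        raise

Handling it in one place like this would mean the chance scheduler, simple scheduler, etc. wouldn't each need their own copy of the db.instance_update call.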