FYI we're also hitting this on trusty/mitaka for what looks
like incompletely deleted instances:
* still running at the hypervisor, i.e.
virsh dominfo <UUID> # shows it ok (quick check sketched below)
* deleted in both the nova 'instances' and 'block_device_mapping' tables.
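As a quick pre-check on the compute node, something like this works for us
(a minimal sketch; substitute your instance UUID):
virsh dominfo <UUID>   # full domain info, State should read 'running'
virsh domstate <UUID>  # prints just the state, e.g. 'running'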
Once certain it's still running at the hypervisor,
our workaround is to revive the instance in the nova DB
with something like:
mysql> begin work;
mysql> update instances
set vm_state='active', deleted=0, deleted_at=NULL
where uuid='<UUID>';
mysql> update block_device_mapping
set deleted=0, deleted_at=NULL
where instance_uuid='<UUID>';
mysql> commit work;
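To double-check the revival took, a read-back like this helps
(a sketch, assuming the default 'nova' database):
mysql> select uuid, vm_state, deleted, deleted_at, host, node
from instances where uuid='<UUID>';
mysql> select device_name, deleted, deleted_at
from block_device_mapping where instance_uuid='<UUID>';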
Note it has also happened to us after failed migrations
(i.e. the instance showing at the 'wrong' host in the nova DB);
we've fixed those by adding to the 1st UPDATE:
host='<service_hostname>', node='<hypervisor_hostname>',
with the above hostnames taken from:
- <service_hostname> from nova service-list
- <hypervisor_hostname> from nova hypervisor-list
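For the failed-migration case the 1st UPDATE then looks roughly like
(a sketch; take the two hostnames from the commands above):
mysql> update instances
set vm_state='active', deleted=0, deleted_at=NULL,
host='<service_hostname>', node='<hypervisor_hostname>'
where uuid='<UUID>';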