Comment 7 for bug 1566217

Roman Podoliaka (rpodolyaka) wrote :

Timur, thanks for the detailed report.

The problem is that you forgot to pass the `--on-shared-storage` argument to `host-evacuate`. It is required when the instances' ephemeral storage is shared (as in your case, where it's stored in Ceph):

root@node-1:~# nova reset-state --active 8eb0ecb2-a7f4-429f-b72d-d3d0ca7c867a
Reset state for server 8eb0ecb2-a7f4-429f-b72d-d3d0ca7c867a succeeded; new state is active
root@node-1:~# nova help host-evacuate
usage: nova host-evacuate [--target_host <target_host>] [--on-shared-storage]
                          <host>

Evacuate all instances from failed host.

Positional arguments:
  <host>                        Name of host.

Optional arguments:
  --target_host <target_host>  Name of target host. If no host is specified
                                the scheduler will select a target.
  --on-shared-storage           Specifies whether all instances files are on
                                shared storage
root@node-1:~# nova host-evacuate --on-shared-storage node-5.test.domain.local
+--------------------------------------+-------------------+---------------+
| Server UUID                          | Evacuate Accepted | Error Message |
+--------------------------------------+-------------------+---------------+
| 8eb0ecb2-a7f4-429f-b72d-d3d0ca7c867a | True              |               |
+--------------------------------------+-------------------+---------------+
root@node-1:~# nova list
+--------------------------------------+-------+---------+------------------+-------------+----------------------------------+
| ID                                   | Name  | Status  | Task State       | Power State | Networks                         |
+--------------------------------------+-------+---------+------------------+-------------+----------------------------------+
| 8eb0ecb2-a7f4-429f-b72d-d3d0ca7c867a | test1 | REBUILD | rebuild_spawning | NOSTATE     | admin_internal_net=192.168.111.3 |
+--------------------------------------+-------+---------+------------------+-------------+----------------------------------+
root@node-1:~# nova list
+--------------------------------------+-------+--------+------------+-------------+----------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                         |
+--------------------------------------+-------+--------+------------+-------------+----------------------------------+
| 8eb0ecb2-a7f4-429f-b72d-d3d0ca7c867a | test1 | ACTIVE | -          | Running     | admin_internal_net=192.168.111.3 |
+--------------------------------------+-------+--------+------------+-------------+----------------------------------+
root@node-1:~# nova show test1
+--------------------------------------+-----------------------------------------------------------+
| Property                             | Value                                                     |
+--------------------------------------+-----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                      |
| OS-EXT-AZ:availability_zone          | nova                                                      |
| OS-EXT-SRV-ATTR:host                 | node-2.test.domain.local                                  |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | node-2.test.domain.local                                  |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                                         |
| OS-EXT-STS:power_state               | 1                                                         |
| OS-EXT-STS:task_state                | -                                                         |
| OS-EXT-STS:vm_state                  | active                                                    |
| OS-SRV-USG:launched_at               | 2016-05-19T13:48:26.000000                                |
| OS-SRV-USG:terminated_at             | -                                                         |
| accessIPv4                           |                                                           |
| accessIPv6                           |                                                           |
| admin_internal_net network           | 192.168.111.3                                             |
| config_drive                         |                                                           |
| created                              | 2016-05-19T13:20:29Z                                      |
| flavor                               | m1.small (2)                                              |
| hostId                               | 0cdd76f0222dfadacbe8d20fdd62c8e0244ce4df7a6d76c37c32b299  |
| id                                   | 8eb0ecb2-a7f4-429f-b72d-d3d0ca7c867a                      |
| image                                | Ubuntu cloud image (a5fa79ca-9097-4141-bc62-1feb6d2696d0) |
| key_name                             | -                                                         |
| metadata                             | {}                                                        |
| name                                 | test1                                                     |
| os-extended-volumes:volumes_attached | []                                                        |
| progress                             | 0                                                         |
| security_groups                      | default                                                   |
| status                               | ACTIVE                                                    |
| tenant_id                            | 87ec2a9db7674436bc6fee732a7d5b48                          |
| updated                              | 2016-05-19T13:48:27Z                                      |
| user_id                              | d879b7c5c2b54a7a886fb6ffc64a9b83                          |
+--------------------------------------+-----------------------------------------------------------+

With `--on-shared-storage` passed, everything works correctly, as the output above shows.
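If you script evacuations, the flag handling above can be sketched as a small helper that builds the command line. This is just an illustration, not part of python-novaclient; `build_evacuate_cmd` is a hypothetical name:

```python
def build_evacuate_cmd(host, shared_storage, target_host=None):
    """Build a `nova host-evacuate` argument list (illustrative helper).

    host           -- name of the failed compute host
    shared_storage -- True when instance ephemeral disks live on shared
                      storage (e.g. Ceph); the flag is required in that case
    target_host    -- optional explicit target; otherwise the scheduler picks
    """
    cmd = ["nova", "host-evacuate"]
    if target_host:
        cmd += ["--target_host", target_host]
    if shared_storage:
        # Without this flag, rebuilds of Ceph-backed instances fail as in the report.
        cmd.append("--on-shared-storage")
    cmd.append(host)
    return cmd

print(" ".join(build_evacuate_cmd("node-5.test.domain.local", shared_storage=True)))
```

Running it prints `nova host-evacuate --on-shared-storage node-5.test.domain.local`, matching the invocation shown above.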