virsh list shows instance running while no vms on host (after evacuation)

Bug #1594800 reported by Alexandra
This bug affects 1 person
Affects: Mirantis OpenStack
Status: Invalid
Importance: Undecided
Assigned to: MOS Nova

Bug Description

Detailed bug description:
virsh list on the original compute still shows the instance as running after a successful evacuation

Snapshot: https://drive.google.com/a/mirantis.com/file/d/0BxmbtGZe1aPtZDZkMjVrM2E1aFE/view?usp=sharing

Steps to reproduce:
1) Boot vm from image on compute1
2) Kill compute1
3) Evacuate vm to another compute
4) Restore compute1
5) Check vm is on new compute, not on compute1
6) Check virsh list of compute1
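The steps above can be sketched as CLI commands. This is only an outline under the hostnames given in this report (compute1 = node-3, TestVM image, m1.small flavor); the exact flags may differ by release, and step 2 is done out-of-band (e.g. via IPMI):

```shell
# 1) Boot a VM from an image, pinned to compute1 (node-3)
nova boot --image TestVM --flavor m1.small \
    --availability-zone nova:node-3.test.domain.local vm1

# 2) Kill compute1: hard power-off via IPMI (not a nova command)

# 3) Evacuate the VM once nova-compute on node-3 is reported down;
#    with Ceph RBD backing, the instance disk is on shared storage
nova evacuate --on-shared-storage vm1

# 4) Restore compute1: power the node back on

# 5) Confirm the VM now runs on a different compute
nova show vm1 | grep OS-EXT-SRV-ATTR:host

# 6) The source compute must no longer list the domain
ssh node-3 virsh list
```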

Expected results:
1-5) Ok
6) Ok:
virsh list on the new compute shows the instance
virsh list on compute1 does not show the instance

Actual result:
1-5) Ok
6) Not OK: virsh list on compute1 still shows the instance running
root@node-3:~# virsh list
 Id Name State
----------------------------------------------------
 2 instance-00000003 running
Meanwhile the same instance-00000003 is also running on the new compute

Reproducibility:
always
Workaround:
-
Impact:
The automated test fails

Description of the environment:
#9.0 mos 507
1 controller + 3 combined compute & Ceph nodes
Ceph RBD is enabled for all (including Ceph RBD for ephemeral volumes (Nova))

root@node-1:~# nova show instance
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | node-4.test.domain.local |
| OS-EXT-SRV-ATTR:hostname | instance |
| OS-EXT-SRV-ATTR:hypervisor_hostname | node-4.test.domain.local |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-n9fune6s |
| OS-EXT-SRV-ATTR:root_device_name | /dev/vda |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2016-06-21T09:52:35.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| admin_internal_net network | 10.109.11.3 |
| config_drive | True |
| created | 2016-06-21T09:42:10Z |
| description | instance |
| flavor | m1.small (2) |
| hostId | ad62d273a5a0b7ef16324018594242880f5b4bfee9121815328b8aa2 |
| host_status | UP |
| id | 6a447863-d889-4222-a86d-97548c1b4aae |
| image | TestVM (7c011039-fc72-46f9-9302-7fd083db040a) |
| key_name | - |
| locked | False |
| metadata | {} |
| name | instance |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | ACTIVE |
| tenant_id | 641b13d4cb0d40dfac346bab86e012f5 |
| updated | 2016-06-21T09:52:36Z |
| user_id | efb05daab6d54c46846647e5e820c9f6 |
+--------------------------------------+----------------------------------------------------------+

root@node-4:~# virsh list --all
 Id Name State
----------------------------------------------------
 2 instance-00000001 running

root@node-2:~# virsh list --all
 Id Name State
----------------------------------------------------
 2 instance-00000001 running

Tags: area-nova
Changed in mos:
milestone: none → 9.0
assignee: nobody → MOS Nova (mos-nova)
tags: added: area-nova
Revision history for this message
Timofey Durakov (tdurakov) wrote :

Waiting for repro from QA before confirmation

Revision history for this message
Kristina Berezovskaia (kkuznetsova) wrote :

Manually checked on RC2 (495 iso) with an IPMI console.
Evacuation works correctly.
After studying the test code in mos-integration, we found that the suspend method is used instead of destroy. Suspend only pauses the VM, which is why the VM shows up on two computes.
Changing status to Invalid.
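The suspend-vs-destroy difference the comment points at can be illustrated with a toy model (FakeDomain below is a hypothetical stand-in, not the libvirt API): suspending only pauses a guest, so it still counts as an active domain in `virsh list`, while destroying actually stops it.

```python
# Toy illustration (FakeDomain is hypothetical, not libvirt) of why a
# suspended domain still appears in `virsh list` while a destroyed one
# does not: suspend only pauses the guest, destroy hard-stops it.

class FakeDomain:
    """Minimal stand-in for a libvirt domain; states mirror virsh output."""

    def __init__(self, name):
        self.name = name
        self.state = "running"

    def suspend(self):
        # Pauses guest CPUs; the domain stays active and listed.
        self.state = "paused"

    def destroy(self):
        # Hard-stops the guest; it disappears from `virsh list`.
        self.state = "shut off"

    def listed_as_active(self):
        # `virsh list` (without --all) shows running AND paused domains.
        return self.state in ("running", "paused")


dom = FakeDomain("instance-00000003")
dom.suspend()
print(dom.state, dom.listed_as_active())   # paused True  <- the test's mistake

dom.destroy()
print(dom.state, dom.listed_as_active())   # shut off False <- what was intended
```

This matches the observed symptom: the test suspended rather than destroyed, so the old domain remained visible on compute1 alongside the evacuated copy.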

Changed in mos:
status: New → Invalid