nova force-delete does not delete "BUILD" state instances
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Fix Released | Medium | Christopher Yeoh | 2014.2
Bug Description
Description of problem:
Using nova force-delete $instance-id fails when an instance is in status "BUILD" and OS-EXT-
Version-Release number of selected component (if applicable):
2013.2 (Havana)
How reproducible:
Steps to Reproduce:
1. find a seemingly hung instance
2. run nova force-delete against it
3. watch it complain
Actual results:
[root@host02 ~(keystone_admin)]$ nova force-delete 3a83b712-
ERROR: Cannot 'forceDelete' while instance is in vm_state building (HTTP 409) (Request-ID: req-22737c83-
Expected results:
The instance should be deleted.
Additional info:
Here are some logs obtained from this behavior, this is on RHOS4 / RHEL6.5:
--snip--
[root@host02 ~(keystone_admin)]$ nova force-delete 3a83b712-
ERROR: Cannot 'forceDelete' while instance is in vm_state building (HTTP 409) (Request-ID: req-22737c83-
[root@host02 ~(keystone_admin)]$ nova list --all-tenants | grep 3a83b712-
| 3a83b712-
[root@host02 ~(keystone_admin)]$ nova show 3a83b712-
+------
| Property | Value |
+------
| status | BUILD |
| updated | 2014-04-
| OS-EXT-
| OS-EXT-
| foreman_ext network | 192.168.201.6 |
| key_name | foreman-ci |
| image | rhel-guest-
| hostId | 4b98ba395063916
| OS-EXT-STS:vm_state | building |
| OS-EXT-
| foreman_int network | 192.168.200.6 |
| OS-SRV-
| OS-EXT-
| flavor | m1.large (4) |
| id | 3a83b712-
| security_groups | [{u'name': u'default'}, {u'name': u'default'}, {u'name': u'default'}] |
| OS-SRV-
| user_id | 13090770bacc46c
| name | bcrochet-foreman |
| created | 2014-04-
| tenant_id | f8e6ba11caa94ea
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-
| OS-EXT-
| default network | 192.168.87.7 |
| config_drive | |
+------
[root@host08 ~]# virsh list --all | grep instance-000099e9
[root@host08 ~]#
[root@host02 ~(keystone_admin)]$ nova delete 3a83b712-
[root@host02 ~(keystone_admin)]$ nova list --all-tenants | grep 3a83b712-
| 3a83b712-
** insert several more "nova delete" calls **
( back on compute host08 after API called for delete..)
2014-04-22 12:46:21.253 8427 INFO nova.virt.
2014-04-22 12:46:21.668 8427 INFO nova.compute.
2014-04-22 12:46:21.701 8427 INFO nova.virt.
2014-04-22 12:46:21.702 8427 INFO nova.virt.
-- snip --
Notably, this particular instance did not show up in virsh, but it was visible in the qemu-kvm process table:
( tracking instance down to host08 )
[root@host02 ~(keystone_admin)]$ cat /var/log/
2014-04-16 20:27:52.243 28987 INFO nova.scheduler.
2014-04-16 20:27:52.462 28987 INFO nova.scheduler.
( see successful build on host08 nova compute)
[root@host08 ~]# cat /var/log/
2014-04-16 20:27:52.708 8427 AUDIT nova.compute.
2014-04-16 20:27:53.836 8427 AUDIT nova.compute.claims [req-5e3af99e-
2014-04-16 20:27:53.837 8427 AUDIT nova.compute.claims [req-5e3af99e-
2014-04-16 20:27:53.837 8427 AUDIT nova.compute.claims [req-5e3af99e-
2014-04-16 20:27:53.838 8427 AUDIT nova.compute.claims [req-5e3af99e-
2014-04-16 20:27:53.838 8427 AUDIT nova.compute.claims [req-5e3af99e-
2014-04-16 20:27:53.839 8427 AUDIT nova.compute.claims [req-5e3af99e-
2014-04-16 20:27:53.840 8427 AUDIT nova.compute.claims [req-5e3af99e-
2014-04-16 20:27:53.840 8427 AUDIT nova.compute.claims [req-5e3af99e-
2014-04-16 20:27:54.812 8427 INFO nova.virt.
2014-04-16 20:27:58.683 8427 INFO nova.virt.
( shows up in process table )
[root@host08 ~(keystone_admin)]$ ps -ef | grep instance | grep 3a83b712-
nova 22510 8427 0 Apr16 ? 00:04:30 /usr/libexec/
( but virsh does not know about it )
[root@host08 ~]# virsh list --all | grep instance-000099e9
[root@host08 ~]#
[root@host08 ~]# for x in `virsh list --all | grep -v Name | awk '{print $2}'`; do virsh dumpxml $x | grep 3a83b712-
[root@host08 ~]#
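The mismatch above (a qemu-kvm process carrying the instance UUID, but no matching libvirt domain in virsh) can be detected programmatically. The following is an illustrative Python sketch, not part of Nova: it cross-references `ps -ef` output against the domain names `virsh list --all` reports, and flags instance UUIDs that only exist as stray processes. The sample process line and padded UUID are hypothetical stand-ins modeled on the (truncated) report output.

```python
import re

def find_orphaned_instances(ps_output, virsh_names):
    """Return instance UUIDs found on qemu-kvm command lines
    that have no matching libvirt domain.

    ps_output   -- text captured from `ps -ef` on the compute host
    virsh_names -- set of domain names reported by `virsh list --all`
    """
    uuid_re = re.compile(r'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-'
                         r'[0-9a-f]{4}-[0-9a-f]{12}')
    orphans = set()
    for line in ps_output.splitlines():
        if 'qemu-kvm' not in line:
            continue
        # libvirt names Nova guests instance-XXXXXXXX; grab it if present
        name_match = re.search(r'instance-[0-9a-f]{8}', line)
        if name_match and name_match.group() in virsh_names:
            continue  # virsh knows this guest; it is not orphaned
        orphans.update(uuid_re.findall(line))
    return orphans

# Hypothetical sample: the process exists, but virsh reports no domains.
ps_sample = ("nova 22510 8427 0 Apr16 ? 00:04:30 /usr/libexec/qemu-kvm "
             "-name instance-000099e9 "
             "-uuid 3a83b712-0000-0000-0000-000000000000")
print(find_orphaned_instances(ps_sample, virsh_names=set()))
```

Feeding it the real host's `ps -ef` output and virsh domain list would surface exactly the kind of orphan described in the report.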
Changed in nova:
importance: Undecided → Medium
Changed in nova:
assignee: Christopher Yeoh (cyeoh-0) → Alex Xu (xuhj)
Changed in nova:
milestone: none → juno-rc1
assignee: Alex Xu (xuhj) → Christopher Yeoh (cyeoh-0)
Changed in nova:
status: Fix Committed → Fix Released
Changed in nova:
milestone: juno-rc1 → 2014.2
So perhaps this is a documentation/nomenclature issue:
nova delete is for deleting a server.
nova force-delete is for deleting a server which has previously been soft-deleted but not yet reclaimed.
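That distinction is what produces the HTTP 409 seen earlier: Nova's API layer guards each action with a vm_state check and rejects forceDelete while the instance is still building. The sketch below is a simplified, hypothetical reconstruction of that guard (the class and decorator names are illustrative, not Nova's actual implementation):

```python
class InstanceInvalidState(Exception):
    """Raised when an action is not allowed in the current vm_state
    (surfaces to the API client as HTTP 409 Conflict)."""

def check_instance_state(allowed_vm_states):
    """Simplified vm_state guard: reject the action unless the
    instance is in one of the allowed states."""
    def decorator(func):
        def wrapper(instance, *args, **kwargs):
            if instance['vm_state'] not in allowed_vm_states:
                raise InstanceInvalidState(
                    "Cannot '%s' while instance is in vm_state %s"
                    % (func.__name__, instance['vm_state']))
            return func(instance, *args, **kwargs)
        return wrapper
    return decorator

# Under the pre-fix behavior described in this bug, forceDelete only
# applied to instances already soft-deleted but not yet reclaimed.
@check_instance_state(allowed_vm_states={'soft-delete'})
def forceDelete(instance):
    return 'reclaimed %s' % instance['uuid']

building = {'uuid': 'demo-uuid', 'vm_state': 'building'}
try:
    forceDelete(building)
except InstanceInvalidState as exc:
    print(exc)  # same shape as the 409 error message in the report
```

The fix released in 2014.2 amounts to relaxing this guard so force-delete also applies to instances stuck in other states, such as building.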