Volume reserved after instance deleted

Bug #1838385 reported by Eric Xie
This bug affects 1 person
Affects: OpenStack Compute (nova)
Status: Expired
Importance: Undecided
Assigned to: Unassigned

Bug Description

Description
===========
I created an instance booted from a volume.
After deleting the instance while it was still building,
the volume's status remained "reserved".

Steps to reproduce
==================
* I created an instance with `openstack server create --volume vol-for-vm1-50G --flavor ecs_2C4G50G_general --network xtt-net-1 vm1`
* Then I deleted the instance while its status was BUILD:
| 3aa10ae1-2f3f-4c52-8f27-a557cf82de9e | vm1 | BUILD | None | NOSTATE | | | | ecs_2C4G50G_general | 5af94a8a-aab9-4ba5-bf52-ddb815218e61 | cn-north-3a | None | |
* Then I checked the volume's status.

Expected result
===============
The volume's status returns to "available".

Actual result
=============
The volume's status stays "reserved".
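
The transitions at issue form a simple state machine: Cinder marks the volume "reserved" when Nova reserves it for attachment, and a clean delete should move it back to "available". A minimal Python model of that lifecycle (illustrative only; the class and method names are invented for the sketch, this is not Nova or Cinder code):

```python
# Illustrative model of the volume status transitions in this bug.
# NOT Nova/Cinder code; names are invented for the sketch.

class Volume:
    def __init__(self):
        self.status = "available"

    def reserve(self):
        # Nova reserves the volume before attaching it to the building server.
        if self.status != "available":
            raise RuntimeError("volume is not available")
        self.status = "reserved"

    def unreserve(self):
        # A clean instance delete should trigger this cleanup step.
        self.status = "reserved" if False else "available"


vol = Volume()
vol.reserve()      # build starts: status -> "reserved"
print(vol.status)  # instance deleted mid-build, cleanup never ran: "reserved"
vol.unreserve()    # the expected cleanup on delete
print(vol.status)  # "available"
```

The bug is that the `unreserve()` step is skipped (or fails silently) when the server is deleted while still building, leaving the volume stuck in the first printed state.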

Environment
===========
1. Exact version of OpenStack you are running (see
  http://docs.openstack.org/releases/ for all releases):
  # apt list --installed | grep nova

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

nova-api/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-common/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-conductor/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-consoleauth/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-consoleproxy/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-doc/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-placement-api/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
nova-scheduler/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed]
python-nova/xenial,xenial,now 2:17.0.7-6~u16.01 all [installed,automatic]
python-novaclient/xenial,xenial,now 2:9.1.1-1~u16.04+mcp6 all [installed,automatic]

# apt list --installed | grep cinder

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

cinder-backup/xenial,xenial,now 2:12.0.4-2~u16.04 all [installed]
cinder-common/xenial,xenial,now 2:12.0.4-2~u16.04 all [installed,automatic]
cinder-scheduler/xenial,xenial,now 2:12.0.4-2~u16.04 all [installed]
cinder-volume/xenial,xenial,now 2:12.0.4-2~u16.04 all [installed]
python-cinder/xenial,xenial,now 2:12.0.4-2~u16.04 all [installed,automatic]
python-cinderclient/xenial,xenial,now 1:3.5.0-1.0~u16.04+mcp5 all [installed,automatic]

Revision history for this message
Matt Riedemann (mriedem) wrote :

Do you see anything related in the logs, like a failure to unreserve the volume? If the server is not yet on a host, the API will try to perform a "local delete", which should detach the volume:

https://github.com/openstack/nova/blob/17.0.7/nova/compute/api.py#L2085

Are you seeing warnings around that indicating it failed? Are there errors on the Cinder side?

Revision history for this message
Matt Riedemann (mriedem) wrote :

Since this would be a race, it's going to be hard to tell what happened without some investigation/logging to determine where it fails.

Changed in nova:
status: New → Incomplete
Revision history for this message
Matt Riedemann (mriedem) wrote :

I've marked this as Incomplete. If you can provide some logs with the volume ID for tracing then this could be debugged further.
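
Tracing here means filtering the nova-api/nova-compute and cinder logs for the volume ID to see whether the unreserve ever happened. A trivial sketch of that filtering in Python (the volume ID and the sample log lines below are invented for illustration and are not from this bug's environment):

```python
# Sketch: extract every log line mentioning a given volume ID so the
# reserve/unreserve sequence can be traced. The ID and sample lines are
# hypothetical, shaped loosely like nova-api log output.
volume_id = "aaaaaaaa-bbbb-4ccc-8ddd-eeeeeeeeeeee"  # hypothetical ID

sample_log = """\
2019-07-30 09:12:01.123 WARNING nova.compute.api [req-1] Instance build aborted
2019-07-30 09:12:01.456 DEBUG nova.volume.cinder [req-1] Unreserving volume aaaaaaaa-bbbb-4ccc-8ddd-eeeeeeeeeeee
2019-07-30 09:12:02.789 INFO nova.osapi_compute [req-2] DELETE /servers/3aa1 status: 204
"""

# Keep only the lines that mention the volume being traced.
matches = [line for line in sample_log.splitlines() if volume_id in line]
for line in matches:
    print(line)
```

In a real investigation the same filter would run over /var/log/nova/ and /var/log/cinder/ files; an absence of any "unreserve"/attachment-delete line for the volume around the instance delete would point at where the cleanup was skipped.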

Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for OpenStack Compute (nova) because there has been no activity for 60 days.]

Changed in nova:
status: Incomplete → Expired