Just tried another case with shutdown=preserve and found I couldn't delete the volume from cinder, as you described. So far it looks like a problem in cinder: the volume showed as "available" and not attached to anything in cinder, yet "cinder delete" failed to complete.

$ nova boot --flavor m1.nano --block-device source=image,id=59f7eeb3-700a-456f-8d3f-9dfc4cce797b,dest=volume,size=1,shutdown=preserve,bootindex=0 --poll hi
+--------------------------------------+-------------------------------------------------+
| Property                             | Value                                           |
+--------------------------------------+-------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                          |
| OS-EXT-AZ:availability_zone          |                                                 |
| OS-EXT-SRV-ATTR:host                 | -                                               |
| OS-EXT-SRV-ATTR:hostname             | hi                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                               |
| OS-EXT-SRV-ATTR:instance_name        |                                                 |
| OS-EXT-SRV-ATTR:kernel_id            |                                                 |
| OS-EXT-SRV-ATTR:launch_index         | 0                                               |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                 |
| OS-EXT-SRV-ATTR:reservation_id       | r-1j6d6cne                                      |
| OS-EXT-SRV-ATTR:root_device_name     | -                                               |
| OS-EXT-SRV-ATTR:user_data            | -                                               |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-STS:task_state                | scheduling                                      |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | -                                               |
| OS-SRV-USG:terminated_at             | -                                               |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| adminPass                            | M6onRYNPbEJ4                                    |
| config_drive                         |                                                 |
| created                              | 2017-03-15T04:37:29Z                            |
| description                          | -                                               |
| flavor                               | m1.nano (42)                                    |
| hostId                               |                                                 |
| host_status                          |                                                 |
| id                                   | 48c9f377-52d4-46b9-bd6b-d9de2d0dc540            |
| image                                | Attempt to boot from volume - no image supplied |
| key_name                             | -                                               |
| locked                               | False                                           |
| metadata                             | {}                                              |
| name                                 | hi                                              |
| os-extended-volumes:volumes_attached | []                                              |
| progress                             | 0                                               |
| security_groups                      | default                                         |
| status                               | BUILD                                           |
| tags                                 | []                                              |
| tenant_id                            | 1dc696f01a67429a88a20273b5e52e10                |
| updated                              | 2017-03-15T04:37:29Z                            |
| user_id                              | 488ae3ccfbba4655b201fa7c8fbb2686                |
+--------------------------------------+-------------------------------------------------+
Server building... 100% complete
Finished

$ nova list
+--------------------------------------+------+--------+------------+-------------+--------------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                       |
+--------------------------------------+------+--------+------------+-------------+--------------------------------+
| 48c9f377-52d4-46b9-bd6b-d9de2d0dc540 | hi   | ACTIVE | -          | Running     | public=2001:db8::d, 172.24.4.2 |
+--------------------------------------+------+--------+------------+-------------+--------------------------------+

$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| 08e57275-e4d6-4e1e-a781-c5f3d27d06b3 | in-use |      | 1    | ceph        | true     | 48c9f377-52d4-46b9-bd6b-d9de2d0dc540 |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+

$ nova service-list
+----+------------------+---------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host          | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+---------------+----------+---------+-------+----------------------------+-----------------+
| 7  | nova-conductor   | ubuntu-xenial | internal | enabled | up    | 2017-03-15T04:40:55.000000 | -               |
| 10 | nova-scheduler   | ubuntu-xenial | internal | enabled | up    | 2017-03-15T04:40:54.000000 | -               |
| 11 | nova-consoleauth | ubuntu-xenial | internal | enabled | up    | 2017-03-15T04:40:50.000000 | -               |
| 12 | nova-compute     | ubuntu-xenial | nova     | enabled | down  | 2017-03-15T04:38:08.000000 | -               |
+----+------------------+---------------+----------+---------+-------+----------------------------+-----------------+

$ nova delete hi
Request to
delete server hi has been accepted.

$ nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 08e57275-e4d6-4e1e-a781-c5f3d27d06b3 | available |      | 1    | ceph        | true     |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

$ cinder delete 08e57275-e4d6-4e1e-a781-c5f3d27d06b3
Request to delete volume 08e57275-e4d6-4e1e-a781-c5f3d27d06b3 has been accepted.

$ cinder list
+--------------------------------------+----------+------+------+-------------+----------+-------------+
| ID                                   | Status   | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+----------+------+------+-------------+----------+-------------+
| 08e57275-e4d6-4e1e-a781-c5f3d27d06b3 | deleting |      | 1    | ceph        | true     |             |
+--------------------------------------+----------+------+------+-------------+----------+-------------+

with a trace in c-vol.log:

2017-03-15 04:41:31.481 DEBUG cinder.volume.drivers.rbd [req-17ce097b-c92d-43a2-99c2-f9ac05e21ad6 admin None] deleting rbd volume volume-08e57275-e4d6-4e1e-a781-c5f3d27d06b3 from (pid=3285) delete_volume /opt/stack/cinder/cinder/volume/drivers/rbd.py:781
2017-03-15 04:41:31.514 DEBUG cinder.utils [req-17ce097b-c92d-43a2-99c2-f9ac05e21ad6 admin None] Failed attempt 1 from (pid=3285) _print_stop /opt/stack/cinder/cinder/utils.py:780
2017-03-15 04:41:31.515 DEBUG cinder.utils [req-17ce097b-c92d-43a2-99c2-f9ac05e21ad6 admin None] Have been at this for 0.032 seconds from (pid=3285) _print_stop /opt/stack/cinder/cinder/utils.py:782
2017-03-15 04:41:31.516 DEBUG cinder.utils [req-17ce097b-c92d-43a2-99c2-f9ac05e21ad6 admin None] Sleeping for 10.0 seconds from (pid=3285) _backoff_sleep /opt/stack/cinder/cinder/utils.py:774
2017-03-15 04:41:41.553 DEBUG cinder.utils [req-17ce097b-c92d-43a2-99c2-f9ac05e21ad6 admin None] Failed attempt 2 from (pid=3285) _print_stop /opt/stack/cinder/cinder/utils.py:780
2017-03-15 04:41:41.554 DEBUG cinder.utils [req-17ce097b-c92d-43a2-99c2-f9ac05e21ad6 admin None] Have been at this for 10.072 seconds from (pid=3285) _print_stop /opt/stack/cinder/cinder/utils.py:782
2017-03-15 04:41:41.555 DEBUG cinder.utils [req-17ce097b-c92d-43a2-99c2-f9ac05e21ad6 admin None] Sleeping for 20.0 seconds from (pid=3285) _backoff_sleep /opt/stack/cinder/cinder/utils.py:774
2017-03-15 04:42:01.593 DEBUG cinder.utils [req-17ce097b-c92d-43a2-99c2-f9ac05e21ad6 admin None] Failed attempt 3 from (pid=3285) _print_stop /opt/stack/cinder/cinder/utils.py:780
2017-03-15 04:42:01.594 DEBUG cinder.utils [req-17ce097b-c92d-43a2-99c2-f9ac05e21ad6 admin None] Have been at this for 30.112 seconds from (pid=3285) _print_stop /opt/stack/cinder/cinder/utils.py:782
2017-03-15 04:42:01.595 WARNING cinder.volume.drivers.rbd [req-17ce097b-c92d-43a2-99c2-f9ac05e21ad6 admin None] ImageBusy error raised while deleting rbd volume. This may have been caused by a connection from a client that has crashed and, if so, may be resolved by retrying the delete after 30 seconds has elapsed.
2017-03-15 04:42:01.600 ERROR cinder.volume.manager [req-17ce097b-c92d-43a2-99c2-f9ac05e21ad6 admin None] Unable to delete busy volume.
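For what it's worth, the _backoff_sleep entries above show the delete being retried three times with a doubling sleep (10.0 s, then 20.0 s) before giving up. A minimal sketch of that retry-with-exponential-backoff pattern, just to make the log readable - the function name and defaults here are invented for illustration, this is not cinder's actual cinder.utils helper:

```python
import time


def retry_with_backoff(fn, attempts=3, initial=10.0, sleep=time.sleep):
    """Retry fn() with a doubling sleep between failed attempts.

    Illustrative sketch only: mirrors the 10.0 s / 20.0 s sleeps visible
    in the c-vol.log trace; all names and defaults here are made up.
    """
    interval = initial
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                # out of attempts -> caller sees "Unable to delete busy volume"
                raise
            sleep(interval)
            interval *= 2.0
```

Every attempt raised ImageBusy, which matches the warning's "connection from a client that has crashed" theory - note that nova-compute shows "down" in the service list above, so a lingering client connection may still be holding a watch on the rbd image.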