Even when waiting for the instance to reach the 'active' state, the problem still reproduces. See the logs at https://review.openstack.org/#/c/410338/6, in particular the c-vol trace fragment below. Here req-b895f42e-870c-4d30-a98a-ce0efaf837e7 corresponds to the volume delete operation (triggered by instance termination), and req-c615ee35-7e0c-4490-8e57-1339bcc8a970 corresponds to the snapshot delete operation (triggered by image deletion during test cleanup).

--- volume deletion started ---
2016-12-14 11:40:20.931 DEBUG cinder.volume.drivers.rbd [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] opening connection to ceph cluster (timeout=-1). _connect_to_rados /opt/stack/new/cinder/cinder/volume/drivers/rbd.py:220
2016-12-14 11:40:26.194 DEBUG cinder.volume.drivers.rbd [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] volume has no backup snaps _delete_backup_snaps /opt/stack/new/cinder/cinder/volume/drivers/rbd.py:528
2016-12-14 11:40:26.267 DEBUG cinder.volume.drivers.rbd [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] deleting rbd volume volume-e0e726f4-f7a2-4e89-8de5-307f02639a18 delete_volume /opt/stack/new/cinder/cinder/volume/drivers/rbd.py:649
--- snapshot deletion started ---
2016-12-14 11:40:26.275 DEBUG cinder.volume.drivers.rbd [req-c615ee35-7e0c-4490-8e57-1339bcc8a970 tempest-TestVolumeBootPattern-1496098983] opening connection to ceph cluster (timeout=-1). _connect_to_rados /opt/stack/new/cinder/cinder/volume/drivers/rbd.py:220
2016-12-14 11:40:26.382 INFO cinder.volume.drivers.rbd [req-c615ee35-7e0c-4490-8e57-1339bcc8a970 tempest-TestVolumeBootPattern-1496098983] Image volumes/volume-e0e726f4-f7a2-4e89-8de5-307f02639a18 is dependent on the snapshot snapshot-af5114b1-1096-45df-b007-fec7558b7779.
2016-12-14 11:40:31.177 ERROR cinder.volume.manager [req-c615ee35-7e0c-4490-8e57-1339bcc8a970 tempest-TestVolumeBootPattern-1496098983] [snapshot-af5114b1-1096-45df-b007-fec7558b7779] Delete snapshot failed, due to snapshot busy.
--- ^^^ snapshot deletion failed ---
--- volume deletion continued ---
2016-12-14 11:40:31.772 DEBUG cinder.quota [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] Created reservations ['cc95a20b-d1a3-4663-9776-32196739384b', '781df0f0-a697-40ff-88c1-27c33a66b3d1', '05042d6a-15dc-42c7-b3d4-cd7004b7cac7', 'e7616d2e-1e5a-41a5-b796-fe964ea1c214'] reserve /opt/stack/new/cinder/cinder/quota.py:1025
2016-12-14 11:40:31.791 DEBUG cinder.volume.utils [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] Can not find volume e0e726f4-f7a2-4e89-8de5-307f02639a18 at notify usage _usage_from_volume /opt/stack/new/cinder/cinder/volume/utils.py:96
2016-12-14 11:40:31.806 DEBUG cinder.volume.drivers.rbd [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] opening connection to ceph cluster (timeout=-1). _connect_to_rados /opt/stack/new/cinder/cinder/volume/drivers/rbd.py:220
2016-12-14 11:40:31.834 DEBUG cinder.volume.drivers.rbd [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] opening connection to ceph cluster (timeout=-1). _connect_to_rados /opt/stack/new/cinder/cinder/volume/drivers/rbd.py:220
2016-12-14 11:40:31.866 DEBUG cinder.volume.drivers.rbd [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] opening connection to ceph cluster (timeout=-1). _connect_to_rados /opt/stack/new/cinder/cinder/volume/drivers/rbd.py:220
2016-12-14 11:40:37.132 DEBUG cinder.volume.drivers.rbd [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] opening connection to ceph cluster (timeout=-1). _connect_to_rados /opt/stack/new/cinder/cinder/volume/drivers/rbd.py:220
2016-12-14 11:40:37.283 DEBUG cinder.volume.drivers.rbd [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] opening connection to ceph cluster (timeout=-1). _connect_to_rados /opt/stack/new/cinder/cinder/volume/drivers/rbd.py:220
2016-12-14 11:40:37.373 DEBUG cinder.volume.drivers.rbd [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] opening connection to ceph cluster (timeout=-1). _connect_to_rados /opt/stack/new/cinder/cinder/volume/drivers/rbd.py:220
2016-12-14 11:40:37.565 DEBUG cinder.manager [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] Notifying Schedulers of capabilities ... _publish_service_capabilities /opt/stack/new/cinder/cinder/manager.py:175
2016-12-14 11:40:37.566 DEBUG oslo_messaging._drivers.amqpdriver [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] CAST unique_id: 66c1896ea93846e392f02b6b7b0e7ad2 FANOUT topic 'cinder-scheduler' _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:431
2016-12-14 11:40:37.567 3244 DEBUG oslo_messaging._drivers.amqpdriver [-] received message msg_id: 4aa5d1fd63814219a04616049abdbcae reply to reply_611c80901b90436ea64b638bdde71b34 __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:194
2016-12-14 11:40:37.596 DEBUG oslo_messaging._drivers.amqpdriver [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] CAST unique_id: 938160ea81f24ed8964076a0205d5e30 exchange 'openstack' topic 'cinder-scheduler' _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:442
2016-12-14 11:40:37.609 INFO cinder.volume.manager [req-b895f42e-870c-4d30-a98a-ce0efaf837e7 tempest-TestVolumeBootPattern-1496098983] [volume-e0e726f4-f7a2-4e89-8de5-307f02639a18] Deleted volume successfully.
--- volume deletion finished successfully ---
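
The trace makes the race visible: the volume is an RBD clone that depends on the snapshot (the INFO line at 11:40:26.382), so Ceph reports the snapshot as busy while the clone still exists, and the snapshot delete fails at 11:40:31.177 because the volume delete, started at 11:40:26.267, only finishes at 11:40:37.609. A minimal sketch of a test-side workaround, assuming the cleanup can poll the volume until it is actually gone before deleting the image; the show_volume callable and NotFound exception are hypothetical placeholders for a tempest-style client, not the actual tempest API:

import time

class NotFound(Exception):
    """Stand-in for the client's 404 exception (assumption)."""

def wait_for_volume_deletion(show_volume, volume_id, timeout=120, interval=1):
    # Poll until the backend has actually finished the delete; as the trace
    # above shows, the delete request being accepted is not enough.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            show_volume(volume_id)  # any GET /volumes/<id> style call
        except NotFound:
            return  # volume really gone; safe to delete the image/snapshot
        time.sleep(interval)
    raise RuntimeError('volume %s not deleted within %ss' % (volume_id, timeout))

Ordering the cleanup as terminate instance -> wait_for_volume_deletion(...) -> delete image would keep the snapshot delete from overlapping the in-flight volume delete, so the "snapshot busy" error should not occur.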