I think changing the cinder API to just allow a volume to be deleted, without a force flag, when it's 'in-use' is dangerous and the wrong thing to do. We'll get users who do it by mistake and then nuke volumes attached to live VMs.
Or check the exception that comes back from the volume_api.delete call and see if it complains because it's attached, then call terminate_connection, then delete.
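A minimal sketch of that exception-driven approach, assuming illustrative names throughout — `VolumeIsBusy`, the `volume_api` object, and the exact `terminate_connection` signature are stand-ins here, not the real Nova/Cinder APIs:

```python
# Hypothetical sketch: attempt the delete, and only when Cinder
# complains that the volume is attached, tear the connection down
# and retry. All names below are illustrative stand-ins.

class VolumeIsBusy(Exception):
    """Stand-in for the error raised when a volume is still 'in-use'."""

class FakeVolumeAPI:
    """Toy stand-in for the volume API, for illustration only."""
    def __init__(self):
        self.attached = True
        self.deleted = False

    def delete(self, context, volume_id):
        if self.attached:
            raise VolumeIsBusy(volume_id)
        self.deleted = True

    def terminate_connection(self, context, volume_id, connector):
        self.attached = False

def delete_volume(volume_api, context, volume_id, connector):
    try:
        volume_api.delete(context, volume_id)
    except VolumeIsBusy:
        # Volume is still attached: terminate the connection, then retry.
        volume_api.terminate_connection(context, volume_id, connector)
        volume_api.delete(context, volume_id)

api = FakeVolumeAPI()
delete_volume(api, context=None, volume_id="vol-1", connector={})
print(api.deleted)  # True
```

The advantage of this shape is that the happy path stays a single call, and the detach is only attempted when Cinder itself reports the volume as busy.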
So, it seems the issue is related to the fact that in Nova's code we nuke the VM first and then try to delete the volumes: https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2365-L2375
Shouldn't nova ensure that it detaches the volume first before deleting it? https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2307-L2315
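To make the ordering concrete, here is a sketch of a teardown loop that detaches each volume before deleting it, so Cinder never sees a delete against an 'in-use' volume. This is not the actual manager.py code; the API object, the `detach` call, and the block-device-mapping dicts are all illustrative assumptions:

```python
# Hypothetical ordering sketch: during instance teardown, terminate
# the connection and detach each volume *before* issuing the delete.
# The RecordingVolumeAPI stub just records call order for illustration.

class RecordingVolumeAPI:
    """Toy stand-in that records the order of volume API calls."""
    def __init__(self):
        self.calls = []

    def terminate_connection(self, context, volume_id, connector):
        self.calls.append(("terminate_connection", volume_id))

    def detach(self, context, volume_id):
        self.calls.append(("detach", volume_id))

    def delete(self, context, volume_id):
        self.calls.append(("delete", volume_id))

def cleanup_volumes(volume_api, context, bdms, connector):
    # Detach first, so the volume has left the 'in-use' state by the
    # time the delete request reaches Cinder.
    for bdm in bdms:
        if bdm.get("delete_on_termination"):
            volume_api.terminate_connection(context, bdm["volume_id"], connector)
            volume_api.detach(context, bdm["volume_id"])
            volume_api.delete(context, bdm["volume_id"])

api = RecordingVolumeAPI()
cleanup_volumes(api, None,
                [{"volume_id": "vol-1", "delete_on_termination": True}], {})
print([name for name, _ in api.calls])
# ['terminate_connection', 'detach', 'delete']
```

The point of the sketch is only the call ordering: delete comes last, after the connection and attachment are gone, which avoids needing any change to Cinder's 'in-use' delete check.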