So when nova-compute is down and nova deletes the VM via its local-delete logic, the VM on the hypervisor is still alive and the volume is still attached to it. Only after the VM is actually destroyed can the volume be deleted successfully.
Taking a look at the Cinder third-party drivers, we can see that most of them leave these two functions empty (except dell_emc, dothill, hpe and so on). So I suspect most of the drivers have this problem as well.
LVM works well because it removes the export and the iSCSI target by itself:
https://github.com/openstack/cinder/blob/master/cinder/volume/targets/iscsi.py#L220
But Ceph does nothing:
https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/rbd.py#L1019
https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/rbd.py#L1043
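To illustrate the difference, here is a simplified sketch, not the real Cinder code: the class and method names only mimic the shape of the driver interface, and the "export registry" is a stand-in for whatever the backend actually tracks. A driver whose remove_export is a no-op leaves the export behind, while an LVM-style driver delegates the teardown to its target helper.

```python
# Illustrative model of the export lifecycle in a Cinder volume driver.
# None of these classes are the actual Cinder implementation.

class FakeTargetHelper:
    """Stands in for a target admin helper (e.g. an iSCSI target driver)."""
    def __init__(self):
        self.exports = set()

    def create_export(self, volume_id):
        self.exports.add(volume_id)

    def remove_export(self, volume_id):
        self.exports.discard(volume_id)


class NoOpExportDriver:
    """Models drivers (like rbd) that leave remove_export empty."""
    def __init__(self, target):
        self.target = target

    def create_export(self, volume_id):
        self.target.create_export(volume_id)

    def remove_export(self, volume_id):
        pass  # nothing happens: the export is left behind on the backend


class LvmStyleDriver(NoOpExportDriver):
    """Models the LVM driver, which actually tears the export down."""
    def remove_export(self, volume_id):
        self.target.remove_export(volume_id)


backend = FakeTargetHelper()
noop = NoOpExportDriver(backend)
noop.create_export("vol-1")
noop.remove_export("vol-1")
print("vol-1" in backend.exports)   # True: stale export remains

backend2 = FakeTargetHelper()
lvm = LvmStyleDriver(backend2)
lvm.create_export("vol-1")
lvm.remove_export("vol-1")
print("vol-1" in backend2.exports)  # False: export cleaned up
```

With a no-op remove_export, nothing on the backend side ever detaches the volume, which is why the delete only succeeds once the VM itself is destroyed.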
Reproduce: http://paste.openstack.org/show/602801/