Unrescue will not remove rescue disk in ceph when image_type=rbd

Bug #1478199 reported by Chung Chih, Hung on 2015-07-25
This bug affects 1 person
Affects: OpenStack Compute (nova)
Assigned to: Chung Chih, Hung

Bug Description

This bug occurs when using libvirt/QEMU with images_type=rbd.

Rescuing an instance produces a rescue kernel and ramdisk on local disk.
It also produces a rescue disk, which is saved in ceph via rbd.
When a user unrescues the instance, nova removes the rescue kernel and ramdisk from local disk.
But the rescue disk that was created during the rescue step still exists.
We can use the rbd or rados command to check whether the objects still exist in the pool.
For example:
sudo rbd --pool $POOL_NAME ls | grep .rescue
sudo rados --pool $POOL_NAME ls | grep .rescue

Why does this happen?
The unrescue action removes the local rescue files and lvm disks, but it does not remove the rbd disk.
Therefore we need to add a condition on the libvirt images_type setting so that the correct type of disk is removed.
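A minimal sketch of the conditional cleanup being proposed (all names here are hypothetical illustrations, not nova's actual driver API):

```python
# Hypothetical sketch: unrescue cleanup that dispatches on images_type.
# The function and callback names are invented for illustration; the point
# is that rescue disks stored in rbd need their own removal path, since
# deleting local files alone leaves the .rescue volume in the ceph pool.

def cleanup_rescue_disks(images_type, local_files, remove_local, remove_rbd):
    """Remove rescue artifacts left behind by a rescue operation.

    local_files  -- rescue kernel/ramdisk paths, always on local disk
    remove_local -- callback that deletes one local file
    remove_rbd   -- callback that deletes the rbd-backed rescue disk
    """
    for path in local_files:
        remove_local(path)   # kernel and ramdisk always live locally
    if images_type == 'rbd':
        remove_rbd()         # the rescue disk lives in the ceph pool
```

With this kind of conditional, an rbd-backed deployment would delete both the local rescue files and the ceph-side rescue disk, while other images_type values keep the existing local-only behavior.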

Changed in nova:
assignee: nobody → lyanchih (lyanchih)
status: New → In Progress
Yaguang Tang (heut2008) on 2015-07-26
tags: added: kilo-backport-potential
tags: added: rescue
tags: added: ceph

Change abandoned by Sean Dague (<email address hidden>) on branch: master
Review: https://review.openstack.org/205766
Reason: This review is > 6 weeks without comment, and failed Jenkins the last time it was checked. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and leaving a 'recheck' comment to get fresh test results.
