glance allows delete even if the image cannot be deleted from the ceph backend store
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
glance_store | Fix Released | Medium | Edward Hope-Morley |
Bug Description
The following steps *should* reproduce the issue. In short: if I have a copy-on-write volume cloned from an image, Glance allows me to delete the image from Glance itself even though the image cannot be deleted from the backend store, thus requiring manual deletion from the Ceph cluster.
Upload an image to Glance. This will create a snapshot of the raw image.
glance image-create --name="testimage" --is-public="true" --disk-format="raw" --container-
Create a cinder volume clone of the image
nova volume-create --image-id <img-id> --display-name test-vol 4
We can then see that we have a snapshot image in the glance/images pool and a cloned image in the cinder/volumes pool:
rbd -p images ls| grep <img-id> # returns <img-id>
rbd -p volumes ls| grep vol-<vol-id> # returns vol-<vol-id>
Now if I delete the glance image...
glance delete <img-id>
I get a failure, as expected, since the snapshot is in use:
Request returned failure status.
Traceback (most recent call last):
  ...
  File "/usr/lib/
    return delete_
  File "/usr/lib/
    return store.delete(loc)
  File "/usr/lib/
    raise exception.
InUseByStore: The image cannot be deleted because it is in use through the backend store outside of Glance.
(HTTP 500)
But the image has been deleted from Glance, so there is no trace of it other than in Ceph itself:
glance index| grep <img-id> # returns nothing
Now if I delete the cloned volume...
nova volume-delete <vol>
The rbd image is still around and I have no way of deleting it other than going
into Ceph and manually deleting it.
rbd -p images ls| grep <img-id> # returns <img-id>
rbd -p volumes ls| grep vol-<vol-id> # returns nothing
I suggest we disallow deleting the image from Glance if it is 'in-use'. Perhaps
we could even give the user information on who or what is using the image so
they can resolve the dependencies.
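The suggested behaviour can be sketched as below. This is a minimal illustration only, using a toy in-memory backend: the names CowBackend, delete_image and list_children are hypothetical and are not the real glance_store RBD driver API. The point is the ordering: query the backend for dependent clones first, refuse the delete (and report the users) if any exist, and only drop the Glance registry record after the backend delete has actually succeeded.

```python
class InUseByStore(Exception):
    """Image is still referenced by clones in the backend store."""


class CowBackend:
    """Toy stand-in for a copy-on-write backend such as Ceph RBD."""

    def __init__(self):
        # image name -> set of clone names depending on its snapshot
        self.children = {}

    def create_image(self, name):
        self.children[name] = set()

    def clone(self, parent, child):
        self.children[parent].add(child)

    def list_children(self, name):
        return sorted(self.children[name])


def delete_image(backend, registry, name):
    """Refuse to touch the registry entry while clones still exist,
    and tell the caller exactly which clones block the delete."""
    users = backend.list_children(name)
    if users:
        raise InUseByStore(
            "image %s is in use by clones: %s" % (name, ", ".join(users)))
    del backend.children[name]
    # Only drop the Glance record once the store delete has succeeded,
    # so a failed backend delete never leaves an orphaned rbd image.
    registry.discard(name)
```

With this ordering, the failing case in the bug report would leave the image visible in `glance index` instead of silently orphaning it in the images pool.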
description: updated
Changed in glance:
  assignee: nobody → Edward Hope-Morley (hopem)
Changed in glance:
  status: New → In Progress
Changed in glance:
  importance: Undecided → Medium
tags: added: rbd
affects: glance → glance-store
OK, how about this for a workaround: when an image is deleted in Glance, if the image has clone(s) the backend delete will fail (but the Glance delete will succeed). In this case we could create a flag (object?) to indicate that the image is no longer needed by Glance. Then, each time Glance performs a delete (and/or other operations?), it can check these 'flags' and attempt to delete the volume if clones/children no longer exist.
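The deferred-delete workaround can be sketched as follows. Again this is a toy in-memory model under stated assumptions: the `pending` flag set, `mark_deleted` and `sweep` are hypothetical names, not real glance_store API, and the real implementation would need to persist the flag somewhere durable (e.g. an object in the pool).

```python
class Backend:
    """Toy backend modelling the proposed 'flag now, reap later' scheme."""

    def __init__(self):
        self.children = {}   # image -> set of dependent clones
        self.pending = set()  # images Glance no longer needs

    def mark_deleted(self, image):
        """Glance delete: succeed immediately, but flag the image if it
        still has clones so a later pass can reap it."""
        if self.children.get(image):
            self.pending.add(image)
        else:
            self.children.pop(image, None)

    def sweep(self):
        """Run on each subsequent delete (or periodically): reap any
        flagged image whose clones have all gone away."""
        for image in list(self.pending):
            if not self.children.get(image):
                self.children.pop(image, None)
                self.pending.discard(image)
```

In the bug's scenario, `mark_deleted` would flag the image while `vol-<vol-id>` still exists, and a later `sweep` (after `nova volume-delete`) would remove the leftover rbd image without manual Ceph intervention.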