Gluster-FS volume snapshots are stuck in error_deleting if trying to delete them when the volume is attached

Bug #1245910 reported by Dafna Ron
This bug affects 1 person
Affects: Cinder
Status: Invalid
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Gluster is configured as my Cinder backend.
I created an instance snapshot of an instance that was booted from a volume; for a boot-from-volume instance, this creates a new volume snapshot in addition to the new image.
I then tried to delete the volume snapshot while the instance was still up (so the volume was still attached),
and the snapshot is now stuck in error_deleting.

If we delete the snapshot after destroying the instance (so the volume is no longer attached), the deletion succeeds.

To reproduce (a scripted version follows the steps):
1. boot an instance from a volume
2. take a snapshot of the instance (from Horizon)
3. try to delete the volume snapshot while the instance is still running
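
The same steps, scripted; a minimal sketch using python-novaclient and python-cinderclient from this era, where the credentials, flavor, and BOOT_VOLUME_ID are placeholder assumptions, not values from this report:

    # Hypothetical reproduction script; credentials, flavor, and
    # BOOT_VOLUME_ID are placeholders, not values from this report.
    from cinderclient.v1 import client as cinder_client
    from novaclient.v1_1 import client as nova_client

    CREDS = ('admin', 'secret', 'admin', 'http://controller:5000/v2.0')
    nova = nova_client.Client(*CREDS)
    cinder = cinder_client.Client(*CREDS)

    BOOT_VOLUME_ID = '11111111-2222-3333-4444-555555555555'

    # 1. Boot an instance from the volume (vda mapped to the Cinder
    #    volume; no image is needed for boot-from-volume).
    server = nova.servers.create(
        name='bfv-test', image=None, flavor='1',
        block_device_mapping={'vda': '%s:::0' % BOOT_VOLUME_ID})

    # 2. Snapshot the instance; for a boot-from-volume guest this
    #    creates a volume snapshot behind the new image.
    #    (A wait for the snapshot to become available is elided here.)
    nova.servers.create_image(server, 'bfv-test-snap')

    # 3. Delete the volume snapshot while the instance is still
    #    running -- on the affected setup it ends up error_deleting.
    for snap in cinder.volume_snapshots.list():
        if snap.volume_id == BOOT_VOLUME_ID:
            cinder.volume_snapshots.delete(snap)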

2013-10-29 16:20:58.654 12921 ERROR cinder.openstack.common.rpc.amqp [req-1d0e180b-1b67-409a-a5ca-54209d9ea36b f0752992e0b94e10bbf4bf133d7dcfb7 2ce14710a7b341878871e9a9ee06dc64] Exception during message handling
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp Traceback (most recent call last):
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 441, in _process_data
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp **args)
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 148, in dispatch
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp return getattr(proxyobj, method)(ctxt, **kwargs)
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 808, in wrapper
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp return func(self, *args, **kwargs)
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 424, in delete_snapshot
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp {'status': 'error_deleting'})
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp self.gen.next()
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 412, in delete_snapshot
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp self.driver.delete_snapshot(snapshot_ref)
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 598, in delete_snapshot
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp online_delete_info)
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 756, in _delete_snapshot_online
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp raise exception.GlusterfsException(msg)
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp GlusterfsException: Unable to delete snapshot ec9b5c1d-ec65-4683-b29e-1afd98af1fde, status: error_deleting.
2013-10-29 16:20:58.654 12921 TRACE cinder.openstack.common.rpc.amqp
2013-10-29 16:36:00.579 12921 WARNING cinder.quota [req-fcb23067-21ba-409d-ad5a-2e8bf3f08bd8 f0752992e0b94e10bbf4bf133d7dcfb7 2ce14710a7b341878871e9a9ee06dc64] Deprecated: Default quota for resource: gigabytes is set by the default quota flag: quota_gigabytes, it is now deprecated. Please use the the default quota class for default quota.
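
For context on where the trace ends up: _delete_snapshot_online in the GlusterFS driver hands the block-level work to Nova (the volume is attached to a live domain) and then polls the snapshot status, raising GlusterfsException once the status comes back as error_deleting. A simplified sketch of that flow, with names inferred from the traceback rather than copied from the driver:

    # Simplified sketch of the online snapshot-delete flow from the
    # traceback; names are inferred from the trace, not verbatim code.
    import time


    class GlusterfsException(Exception):
        """Stand-in for cinder.exception.GlusterfsException."""


    def delete_snapshot_online(nova_api, db, context, snapshot,
                               delete_info, timeout=600):
        # The volume is attached, so the block-level work has to be
        # performed by Nova against the running libvirt domain.
        nova_api.delete_volume_snapshot(context, snapshot['id'],
                                        delete_info)

        deadline = time.time() + timeout
        while time.time() < deadline:
            s = db.snapshot_get(context, snapshot['id'])
            if s['status'] == 'error_deleting':
                # The branch hit in this bug: the libvirt call failed
                # on the compute node, so Nova marked the delete failed.
                raise GlusterfsException(
                    'Unable to delete snapshot %s, status: %s.'
                    % (snapshot['id'], s['status']))
            if s['status'] != 'deleting':
                return  # Nova finished; the driver cleans up the file.
            time.sleep(1)
        raise GlusterfsException('Timed out waiting for snapshot delete.')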

tags: added: gluster-fs
summary: - volume snapshots are stuck in error_deleting if trying to delete them
- when the volume is attached
+ Gluster-FS volume snapshots are stuck in error_deleting if trying to
+ delete them when the volume is attached
Eric Harney (eharney)
tags: added: glusterfs
removed: gluster-fs
Mike Perez (thingee)
tags: added: drivers
Revision history for this message
Eric Harney (eharney) wrote :

It isn't clear what caused this without seeing the log from the Nova side. It could be caused by using a libvirt version without the requisite fixes for this feature.

Changed in cinder:
status: New → Incomplete
Revision history for this message
Dafna Ron (dron-3) wrote :

The Nova logs are located in Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1024391

Revision history for this message
Eric Harney (eharney) wrote :

The Nova compute log indicated:
TRACE nova.virt.libvirt.driver libvirtError: virDomainGetBlockJobInfo() failed
upon snapshot delete.

This likely indicates a libvirt version with known bugs in this area. Closing pending further information on reproduction.
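
A quick way to probe that call path on the compute node is to query a block job through libvirt-python directly; a minimal sketch, where the connection URI, domain name, and disk target are assumptions:

    # Probe libvirt's block-job query path (the call that failed in
    # the Nova trace); domain name and disk target are assumptions.
    import libvirt

    conn = libvirt.open('qemu:///system')
    print('libvirt version:', conn.getLibVersion())  # e.g. 1000005 == 1.0.5

    dom = conn.lookupByName('instance-00000001')  # hypothetical domain
    try:
        info = dom.blockJobInfo('vda', 0)  # wraps virDomainGetBlockJobInfo()
        print('block job info:', info or 'no active block job')
    except libvirt.libvirtError as e:
        # The failure mode from the Nova log would surface here.
        print('virDomainGetBlockJobInfo failed:', e)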

Changed in cinder:
status: Incomplete → Invalid