VMware: cascade delete fails with InvalidSnapshot

Bug #1701393 reported by Vipin Balachandran
Affects: Cinder
Status: Fix Released
Importance: Low
Assigned to: Vipin Balachandran
Milestone: —

Bug Description

Volume cascade delete fails with:

2017-06-29 15:05:49.893 ERROR oslo_messaging.rpc.server [req-fbc466b5-78a5-461d-a5dd-157e71d1cb77 demo None] Exception during message handling: InvalidSnapshot: Invalid snapshot: Delete snapshot of volume not supported in state: deleting.
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server Traceback (most recent call last):
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 153, in _process_incoming
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 213, in dispatch
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 183, in _do_dispatch
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server result = func(ctxt, **new_args)
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "<decorator-gen-240>", line 2, in delete_volume
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/opt/stack/cinder/cinder/coordination.py", line 298, in _synchronized
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server return f(*a, **k)
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "<decorator-gen-239>", line 2, in delete_volume
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/opt/stack/cinder/cinder/objects/cleanable.py", line 207, in wrapper
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server result = f(*args, **kwargs)
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/opt/stack/cinder/cinder/volume/manager.py", line 800, in delete_volume
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server new_status)
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server self.force_reraise()
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/opt/stack/cinder/cinder/volume/manager.py", line 777, in delete_volume
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server self.delete_snapshot(context, s)
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "<decorator-gen-242>", line 2, in delete_snapshot
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/opt/stack/cinder/cinder/coordination.py", line 298, in _synchronized
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server return f(*a, **k)
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/opt/stack/cinder/cinder/volume/manager.py", line 1108, in delete_snapshot
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server snapshot.save()
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server self.force_reraise()
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/opt/stack/cinder/cinder/volume/manager.py", line 1098, in delete_snapshot
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server self.driver.delete_snapshot(snapshot)
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/opt/stack/cinder/cinder/volume/drivers/vmware/vmdk.py", line 694, in delete_snapshot
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server self._delete_snapshot(snapshot)
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server File "/opt/stack/cinder/cinder/volume/drivers/vmware/vmdk.py", line 685, in _delete_snapshot
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server raise exception.InvalidSnapshot(reason=msg)
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server InvalidSnapshot: Invalid snapshot: Delete snapshot of volume not supported in state: deleting.
2017-06-29 15:05:49.893 TRACE oslo_messaging.rpc.server
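The failure mode behind this traceback can be sketched as follows. This is an illustrative reduction, not the actual driver code: the VMDK driver's pre-fix check treats any volume whose status is not 'available' as attached, while the cascade-delete path in the volume manager sets the status to 'deleting' before deleting the volume's snapshots, so the snapshot deletion is rejected even for a detached volume.

```python
class InvalidSnapshot(Exception):
    """Stand-in for cinder.exception.InvalidSnapshot."""


def _delete_snapshot(volume_status):
    # Buggy pre-fix check: any non-'available' status is assumed to mean
    # "attached", so 'deleting' (set during cascade delete) is rejected too.
    if volume_status != 'available':
        raise InvalidSnapshot(
            "Delete snapshot of volume not supported in state: %s"
            % volume_status)


def cascade_delete(volume):
    # The manager marks the volume 'deleting' first, then deletes its
    # snapshots -- which trips the status check above.
    volume['status'] = 'deleting'
    for _snap in volume['snapshots']:
        _delete_snapshot(volume['status'])
```

Running `cascade_delete` on a detached volume with one snapshot raises `InvalidSnapshot`, matching the error in the log above.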

Tags: drivers vmware
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to cinder (master)

Fix proposed to branch: master
Review: https://review.openstack.org/479095

Changed in cinder:
status: New → In Progress
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to cinder (master)

Reviewed: https://review.openstack.org/479095
Committed: https://git.openstack.org/cgit/openstack/cinder/commit/?id=c4cc7df0bd89d4d8348a05d8e42a7b24afdca844
Submitter: Jenkins
Branch: master

commit c4cc7df0bd89d4d8348a05d8e42a7b24afdca844
Author: Vipin Balachandran <email address hidden>
Date: Thu Jun 29 16:35:27 2017 -0700

    VMware: Fix volume cascade delete

    VMDK driver does not allow snapshot deletion if the volume
    is attached. Currently we assume the volume to be attached
    if its state is not 'available', which is wrong. During
    cascade delete, the volume status is set to 'deleting'.
    Therefore, snapshot deletion fails even if the volume is
    not attached. Fixing this by using volume attachment info
    to check for attached volumes.

    Change-Id: I83874d6d287fe3950b003f7eac74db9a1602432e
    Closes-bug: #1701393
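The approach the commit describes can be sketched like this. Names are illustrative rather than the merged driver code: instead of inferring attachment from the volume's status, the check consults the volume's attachment records, so a detached volume in the 'deleting' state can have its snapshots removed.

```python
class InvalidSnapshot(Exception):
    """Stand-in for cinder.exception.InvalidSnapshot."""


def _is_volume_attached(volume):
    # Fixed check: rely on attachment info, not on the status field.
    return bool(volume.get('volume_attachment'))


def delete_snapshot(volume, snapshot_name):
    # Pre-fix code rejected any status other than 'available'; now only
    # an actually attached volume is refused.
    if _is_volume_attached(volume):
        raise InvalidSnapshot(
            "Delete snapshot of volume not supported in state: %s"
            % volume['status'])
    return "deleted %s" % snapshot_name
```

With this check, the cascade-delete case (status 'deleting', no attachments) succeeds, while snapshot deletion on an attached volume is still refused.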

Changed in cinder:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/cinder 11.0.0.0b3

This issue was fixed in the openstack/cinder 11.0.0.0b3 development milestone.
