fail to delete a snapshot after creating a volume from it with RBD driver

Bug #1485897 reported by Yogev Rabl
This bug affects 9 people
Affects: Cinder
Status: Won't Fix
Importance: Undecided
Assigned to: Unassigned
Milestone: (none)

Bug Description

Description of problem:
The deletion of a snapshot fails with the error

2015-08-18 11:11:39.198 4418 ERROR cinder.volume.manager [req-e721e329-ba76-4088-bce6-dbfb0757fe05 b7bc6cb74f5d4935b8a994d4c4582de2 e577bce6013b4fde98e5e44bc82e7ca1 - - -] Cannot delete snapshot 0b33c534-7f52-4aa5-9fbb-725eb929b771: snapshot is busy

while a volume that was created from it still exists.
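
Note: the RBD driver creates the new volume as a copy-on-write clone of the snapshot (the snapshot is protected, then cloned), so Ceph reports the snapshot as busy for as long as a clone depends on it. The dependency can be confirmed on the Ceph side; a minimal sketch, assuming the default "volumes" pool and illustrative IDs:

# list the clones that still depend on the snapshot (names are illustrative)
rbd children volumes/volume-<source-id>@snapshot-<snapshot-id>
volumes/volume-<new-volume-id>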

Version-Release number of selected component (if applicable):
python-cinder-2015.1.0-3.el7ost.noarch
ceph-common-0.80.8-7.el7cp.x86_64
openstack-cinder-2015.1.0-3.el7ost.noarch
python-cinderclient-1.2.1-1.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create a volume
2. Take a snapshot of the volume
3. Create a new volume from the snapshot
4. Delete the snapshot
5. Check the snapshot list
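
For reference, the same steps with the cinder CLI; a minimal sketch where the names, the 1 GB size, and <snap1-id> are illustrative:

cinder create --name vol1 1
cinder snapshot-create --name snap1 vol1
cinder create --snapshot-id <snap1-id> --name vol2 1
cinder snapshot-delete <snap1-id>
cinder snapshot-list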

Actual results:
The snapshot stays in an available state

Expected results:
The Cinder client should either provide output that explains why the action failed, or mark the snapshot as deleted in the Ceph pool and delete the snapshot record in Cinder

Revision history for this message
tobe (chendihao) wrote :

The expected result is that we are not able to delete the snapshot and that we get an error message explaining why.

Is anybody working on this?

Revision history for this message
Eric Harney (eharney) wrote :

This will be fixed by this patch:

https://review.openstack.org/#/c/281550/

Revision history for this message
Ivan Kolodyazhny (e0ne) wrote :

IMO, we have to set rbd_flatten_volume_from_snapshot=True in cinder.conf to avoid this situation.
    cfg.BoolOpt('rbd_flatten_volume_from_snapshot',
                default=False,
                help='Flatten volumes created from snapshots to remove '
                     'dependency from volume to snapshot'),
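
Flattening copies the snapshot's data into the new volume right after the clone is made, which removes the parent link so the snapshot can later be deleted. The equivalent manual operation on the Ceph side; a minimal sketch with illustrative names:

rbd flatten volumes/volume-<new-volume-id>
# the snapshot now has no children, so it can be unprotected and removed
rbd children volumes/volume-<source-id>@snapshot-<snapshot-id>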

Changed in cinder:
status: New → Won't Fix
Revision history for this message
Andres Toomsalu (andres-active) wrote :

We did set rbd_flatten_volume_from_snapshot=True in cinder.conf and restarted all cinder services, but it does not seem to have any effect on the issue: we still get "Delete snapshot failed, due to snapshot busy." when a child volume exists.

cat /etc/cinder/cinder.conf | grep rbd_flatten_volume_from_snapshot
rbd_flatten_volume_from_snapshot=True

apt list --installed | grep cinder
cinder-api/mos9.0-updates,now 2:8.1.0-6~u14.04+mos14 all [installed]
cinder-backup/mos9.0-updates,now 2:8.1.0-6~u14.04+mos14 all [installed]
cinder-common/mos9.0-updates,now 2:8.1.0-6~u14.04+mos14 all [installed,automatic]
cinder-scheduler/mos9.0-updates,now 2:8.1.0-6~u14.04+mos14 all [installed]
cinder-volume/mos9.0-updates,now 2:8.1.0-6~u14.04+mos14 all [installed]
python-cinder/mos9.0-updates,now 2:8.1.0-6~u14.04+mos14 all [installed,automatic]
python-cinderclient/mos9.0-updates,now 1:1.6.0-3~u14.04+mos4 all [installed,automatic]

apt list --installed | grep ceph
ceph/mos9.0,now 0.94.6-1~u14.04+mos1 amd64 [installed]
ceph-common/mos9.0,now 0.94.6-1~u14.04+mos1 amd64 [installed,automatic]
ceph-deploy/mos9.0,now 1.5.20-0~u14.04+mos2 all [installed]
libcephfs1/mos9.0,now 0.94.6-1~u14.04+mos1 amd64 [installed,automatic]
python-ceph/mos9.0,now 0.94.6-1~u14.04+mos1 all [installed]
python-cephfs/mos9.0,now 0.94.6-1~u14.04+mos1 amd64 [installed,automatic]

Revision history for this message
Andres Toomsalu (andres-active) wrote :

Added a debug line to the create_volume_from_snapshot function of the rbd.py driver:

LOG.debug("blabla %s", self.configuration.rbd_flatten_volume_from_snapshot)

cat /etc/cinder/cinder.conf | grep rbd_flatten
#rbd_flatten_volume_from_snapshot = false
rbd_flatten_volume_from_snapshot = true

service cinder-volume restart
cinder-volume stop/waiting
cinder-volume start/running, process 16343

tail -f /var/log/cinder-all.log | grep blabla
<159>Jan 20 09:51:07 node-6 cinder-volume: 2017-01-20 09:51:07.378 16451 DEBUG cinder.volume.drivers.rbd [req-fb88af6f-015d-42ea-85b8-2857ec26c27a 0832297ba2e94aacb7fe416c4933b24e 89baa53c631748c2807c9f41bc853090 - - -] blabla False create_volume_from_snapshot /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py:596

Revision history for this message
Andres Toomsalu (andres-active) wrote :

Culprit found: the rbd_flatten_volume_from_snapshot option should be set under the [RBD-backend] section instead of [DEFAULT]. The default cinder.conf needs fixing...
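
For example; a minimal cinder.conf sketch, where "RBD-backend" is this deployment's backend section name (it must match an entry in enabled_backends and may differ elsewhere):

[DEFAULT]
enabled_backends = RBD-backend

[RBD-backend]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_flatten_volume_from_snapshot = true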

Revision history for this message
Tzach Shefi (tshefi) wrote :

FYI this is still a valid bug on Queens :(
Version:
openstack-cinder-12.0.1-0.20180326201852.46c4ec1.el7ost.noarch
puppet-cinder-12.3.1-0.20180222074326.18152ac.el7ost.noarch
python2-cinderclient-3.5.0-1.el7ost.noarch
python-cinder-12.0.1-0.20180326201852.46c4ec1.el7ost.noarch
ceph-common-12.2.1-46.el7cp.x86_64
collectd-ceph-5.8.0-5.el7ost.x86_64
puppet-ceph-2.5.1-0.20180305100232.928fb38.el7ost.noarch
ceph-selinux-12.2.1-46.el7cp.x86_64
libcephfs2-12.2.1-46.el7cp.x86_64
ceph-base-12.2.1-46.el7cp.x86_64
python-cephfs-12.2.1-46.el7cp.x86_64
ceph-mon-12.2.1-46.el7cp.x86_64

Same reproduce steps and results.

Revision history for this message
Rima Khoury (rimakhoury) wrote :

Also valid on OpenStack Rocky.
