Cinder does not report an error when snapshot failed to be deleted

Bug #1611826 reported by Nadezhda Kabanova
Affects              Status     Importance  Assigned to       Milestone
Mirantis OpenStack   Won't Fix  High        Ivan Kolodyazhny
  10.0.x             Won't Fix  High        Ivan Kolodyazhny
  8.0.x              Won't Fix  High        Ivan Kolodyazhny
  9.x                Won't Fix  High        Ivan Kolodyazhny

Bug Description

Detailed bug description:

When a volume snapshot has at least one clone (a non-flattened child image in Ceph terms; from the OpenStack perspective, a volume created from the snapshot), it is not possible to delete that snapshot until you flatten the child RBD image. This is expected, but the issue is that Horizon doesn't report an error, only "Successfully scheduled". The behavior is the same when you delete from the CLI.
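For context, this is what the layering looks like on the Ceph side (the image names here are illustrative placeholders, and the exact error text may differ between Ceph releases). A protected snapshot that still has children cannot be unprotected, which is what blocks the delete on the backend:

# rbd children volumes/volume-<vol-id>@snapshot-<snap-id>
volumes/volume-<clone-id>
# rbd snap unprotect volumes/volume-<vol-id>@snapshot-<snap-id>
rbd: unprotecting snap failed: (16) Device or resource busy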

Steps to reproduce:

1) Create a volume (the Glance image used doesn't matter)
2) Create a snapshot of the volume
3) Create a volume from the snapshot created at step 2
4) Try to delete the snapshot (a rough CLI equivalent is sketched below)
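A rough CLI equivalent of these steps (the volume names and the 1 GB size are illustrative):

# cinder create --name pap 1
# cinder snapshot-create --name snapshot_of_pap <volume-id>
# cinder create --snapshot-id <snapshot-id> --name pap_clone 1
# cinder snapshot-delete <snapshot-id>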

Expected results:

An error message such as
"Not possible to delete snapshot: it has at least one clone"
or
"Not possible to delete snapshot: it has clone id c8b9cca0-fbe1-416f-8d10-5ca9874b8fed"

Actual result:

From Horizon:
Success: deletion of Volume Snapshot: snapshot_of_pap:

From the CLI: no output
# cinder snapshot-show c8b9cca0-fbe1-416f-8d10-5ca9874b8fed
+--------------------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------------------+--------------------------------------+
| created_at | 2016-08-10T14:30:09.000000 |
| description | |
| id | c8b9cca0-fbe1-416f-8d10-5ca9874b8fed |
| metadata | {} |
| name | snapshot_of_pap |
| os-extended-snapshot-attributes:progress | 100% |
| os-extended-snapshot-attributes:project_id | c28fc7a163494073bea1f5773c946d6e |
| size | 1 |
| status | available |
| volume_id | 773c20e5-3f42-4c92-ad35-4102e6219fd1 |
+--------------------------------------------+--------------------------------------+
# cinder snapshot-delete c8b9cca0-fbe1-416f-8d10-5ca9874b8fed
# cinder snapshot-show c8b9cca0-fbe1-416f-8d10-5ca9874b8fed
+--------------------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------------------+--------------------------------------+
| created_at | 2016-08-10T14:30:09.000000 |
| description | |
| id | c8b9cca0-fbe1-416f-8d10-5ca9874b8fed |
| metadata | {} |
| name | snapshot_of_pap |
| os-extended-snapshot-attributes:progress | 100% |
| os-extended-snapshot-attributes:project_id | c28fc7a163494073bea1f5773c946d6e |
| size | 1 |
| status | available |
| volume_id | 773c20e5-3f42-4c92-ad35-4102e6219fd1 |
+--------------------------------------------+--------------------------------------+

Workaround:
# rbd ls volumes | grep 773c20e5-3f42-4c92-ad35-4102e6219fd1
volume-773c20e5-3f42-4c92-ad35-4102e6219fd1
# rbd snap ls volumes/volume-773c20e5-3f42-4c92-ad35-4102e6219fd1
SNAPID NAME SIZE
   508 snapshot-c8b9cca0-fbe1-416f-8d10-5ca9874b8fed 1024 MB
# rbd children volumes/volume-773c20e5-3f42-4c92-ad35-4102e6219fd1@snapshot-c8b9cca0-fbe1-416f-8d10-5ca9874b8fed
volumes/volume-84e05b5c-557a-4757-ba60-8fa0c7040c2f
# rbd flatten volumes/volume-84e05b5c-557a-4757-ba60-8fa0c7040c2f
Image flatten: 100% complete...done.
Impact:

After this action the snapshot can be removed, since the child has become independent of the snapshot.
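After flattening, the deletion goes through; a follow-up snapshot-show should then fail to find the snapshot (the error line below is a sketch, and the exact text may vary by cinderclient version):

# cinder snapshot-delete c8b9cca0-fbe1-416f-8d10-5ca9874b8fed
# cinder snapshot-show c8b9cca0-fbe1-416f-8d10-5ca9874b8fed
ERROR: No snapshot with a name or ID of 'c8b9cca0-fbe1-416f-8d10-5ca9874b8fed' exists.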

Description of the environment:
 Operating system: Ubuntu 14.04
 Versions of components: MOS 8
Related projects installed:
# fuel plugins
id | name | version | package_version
---|-----------------------------|---------|----------------
1 | elasticsearch_kibana | 0.9.0 | 4.0.0
2 | influxdb_grafana | 0.9.0 | 4.0.0
3 | lma_collector | 0.9.0 | 4.0.0
4 | lma_infrastructure_alerting | 0.9.0 | 4.0.0
5 | static_ntp_routing | 4.0.6 | 4.0.0
6 | zabbix-database | 4.0.1 | 4.0.0
9 | static_routing | 4.0.7 | 4.0.0
10 | ivb5_plugin_vw | 1.0.27 | 4.0.0
7 | zabbix_monitoring | 2.6.25 | 4.0.0
11 | avi_controller_plugin | 1.0.2 | 4.0.0
12 | openbook | 1.3.4 | 4.0.0

 Network model: Neutron+OVS
Additional information:

The debug output of `cinder --debug snapshot-delete` is almost the same whether the snapshot is actually deleted or remains undeleted: in both cases the DELETE request is accepted with a 202 response, because the actual deletion happens asynchronously on the backend.

# cinder --debug snapshot-delete c8b9cca0-fbe1-416f-8d10-5ca9874b8fed
DEBUG:keystoneclient.session:REQ: curl -g -i -X GET http://192.168.0.2:5000/ -H "Accept: application/json" -H "User-Agent: python-keystoneclient"
DEBUG:keystoneclient.session:RESP: [300] content-length: 591 vary: X-Auth-Token server: Apache connection: close date: Wed, 10 Aug 2016 14:45:03 GMT content-type: application/json
RESP BODY: {"versions": {"values": [{"status": "stable", "updated": "2015-03-30T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.4", "links": [{"href": "http://192.168.0.2:5000/v3/", "rel": "self"}]}, {"status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": [{"href": "http://192.168.0.2:5000/v2.0/", "rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel": "describedby"}]}]}}

DEBUG:keystoneclient.auth.identity.v2:Making authentication request to http://192.168.0.2:5000/v2.0/tokens
DEBUG:keystoneclient.session:REQ: curl -g -i -X GET http://192.168.0.2:8776/v2/a75515eef05a4a818d925a879be99ff9/snapshots/c8b9cca0-fbe1-416f-8d10-5ca9874b8fed -H "User-Agent: python-cinderclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}9d36af216253d2f7ddd9940020ef88e6343e533f"
DEBUG:keystoneclient.session:RESP: [200] content-length: 387 x-compute-request-id: req-f8e19416-a7be-47cf-b842-431b3e3c37b9 connection: close date: Wed, 10 Aug 2016 14:45:04 GMT content-type: application/json x-openstack-request-id: req-f8e19416-a7be-47cf-b842-431b3e3c37b9
RESP BODY: {"snapshot": {"status": "available", "metadata": {}, "os-extended-snapshot-attributes:progress": "100%", "name": "snapshot_of_pap", "volume_id": "773c20e5-3f42-4c92-ad35-4102e6219fd1", "os-extended-snapshot-attributes:project_id": "c28fc7a163494073bea1f5773c946d6e", "created_at": "2016-08-10T14:30:09.000000", "size": 1, "id": "c8b9cca0-fbe1-416f-8d10-5ca9874b8fed", "description": ""}}

DEBUG:keystoneclient.session:REQ: curl -g -i -X DELETE http://192.168.0.2:8776/v2/a75515eef05a4a818d925a879be99ff9/snapshots/c8b9cca0-fbe1-416f-8d10-5ca9874b8fed -H "User-Agent: python-cinderclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}9d36af216253d2f7ddd9940020ef88e6343e533f"
DEBUG:keystoneclient.session:RESP: [202] date: Wed, 10 Aug 2016 14:45:04 GMT connection: close content-type: text/html; charset=UTF-8 content-length: 0 x-openstack-request-id: req-3a51a348-ce6c-4454-b082-3c0202d6a62e
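Since the DELETE call returns 202 Accepted regardless of the outcome, the backend failure is only visible in the cinder-volume log on the storage node. One way to find the relevant entries (the log path assumes a default MOS layout) is to grep for the x-openstack-request-id returned by the DELETE response above:

# grep req-3a51a348-ce6c-4454-b082-3c0202d6a62e /var/log/cinder/volume.log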

summary: - Cinder does not report an errror when snapshot failed to be deleted
+ Cinder does not report an error when snapshot failed to be deleted
affects: fuel → mos
Changed in mos:
assignee: nobody → MOS Cinder (mos-cinder)
milestone: none → 9.1
status: New → Confirmed
assignee: MOS Cinder (mos-cinder) → nobody
milestone: 9.1 → 10.0
no longer affects: mos/9.x
Revision history for this message
Nadezhda Kabanova (nkabanova) wrote :

Could you please also include a fix for MOS 8?

description: updated
Revision history for this message
Ivan Kolodyazhny (e0ne) wrote :

Nadezhda, the snapshot delete operation works in async mode, which means we can't return an appropriate error message at the time of the API call. More user-friendly messages will be implemented in the scope of the spec [1] in Newton, so we can't backport them to MOS 9 or MOS 8.

From Cinder's perspective, this is incorrect behavior: if you create a volume from a snapshot, you should still be able to delete the snapshot. It's an issue in the RBD driver and should be fixed there. We could backport that fix to MOS 8 and MOS 9 once it is available, if needed.

[1] https://github.com/openstack/cinder-specs/blob/master/specs/newton/summarymessage.rst

Revision history for this message
Ivan Kolodyazhny (e0ne) wrote :

Cinder already has the needed feature. Please set the `rbd_flatten_volume_from_snapshot` parameter to `True` in cinder.conf to activate it.
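A minimal cinder.conf sketch (the [rbd] section name is an assumption; put the option in whatever section configures your Ceph/RBD backend, or in [DEFAULT] for a single-backend setup, and restart cinder-volume afterwards):

[rbd]
rbd_flatten_volume_from_snapshot = True

With this option enabled, a volume created from a snapshot is flattened at creation time, so the snapshot never acquires children and remains deletable.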

Ivan Kolodyazhny (e0ne)
Changed in mos:
status: Confirmed → Won't Fix