ceph (hammer, firefly): potential data loss when snapshotting rbd volume

Bug #1532967 reported by Dmitry Borodaenko
Affects              Status        Importance  Assigned to        Milestone
Mirantis OpenStack   Fix Released  High        Alexei Sheplyakov
  7.0.x              Won't Fix     High        Alexei Sheplyakov
  8.0.x              Fix Released  High        Alexei Sheplyakov

Bug Description

As reported on ceph-devel, the Ceph version shipped with 7.0 (0.80.9) can be driven by OpenStack services into a crash that can cause data loss:
http://www.spinics.net/lists/ceph-devel/msg28018.html

Please consider upgrading Ceph for MOS 7.0 and earlier versions to the latest Firefly release (0.80.11) and cherry-picking the patches identified in the email to address this issue:
http://www.spinics.net/lists/ceph-devel/msg28019.html
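The Firefly version boundary above can be checked mechanically on a node. A minimal sketch (not part of the report; `is_affected_firefly` is a hypothetical helper) that parses `ceph --version` output and reports whether the build predates the 0.80.11 release recommended above:

```python
import re

# Assumption: 0.80.11 is the first Firefly release carrying the fix,
# per the upgrade recommendation in this bug description.
FIXED_FIREFLY = (0, 80, 11)

def is_affected_firefly(version_output: str) -> bool:
    """Return True if the reported build is a Firefly (0.80.x) release
    older than 0.80.11."""
    m = re.search(r"ceph version (\d+)\.(\d+)\.(\d+)", version_output)
    if not m:
        raise ValueError("unrecognized `ceph --version` output")
    version = tuple(int(x) for x in m.groups())
    # Firefly is the 0.80.x series; earlier point releases lack the fix.
    return version[:2] == (0, 80) and version < FIXED_FIREFLY

print(is_affected_firefly("ceph version 0.80.9"))   # True
print(is_affected_firefly("ceph version 0.80.11"))  # False
```

Note this only covers Firefly; as a later comment in this bug shows, Hammer (0.94.x) was subsequently found to be affected as well, so a version check alone is not a complete assessment.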

Tags: area-ceph
Changed in mos:
milestone: 9.0 → 7.0-updates
Revision history for this message
Alexei Sheplyakov (asheplyakov) wrote :

MOS 8.0 includes ceph hammer (0.94.5), which is not affected by this problem.
Also, we recently prepared an update [1] of ceph in MOS 7.0 to the latest firefly release at the time (0.80.10, the same version shipped with Ubuntu 14.04).
However, the maintenance team rejected the update and decided to maintain a fork.
In particular, the fix for CVE-2015-5245 was backported to 0.80.9 [3] instead of updating to 0.80.11.

[1] https://review.fuel-infra.org/13608
[2] https://review.fuel-infra.org/14876
[3] http://tracker.ceph.com/issues/12942

summary: - Potential data loss bug in old versions of Ceph 0.80 Firefly
+ ceph firefly: ponential data loss when snapshotting rbd volume
Revision history for this message
Fuel Devops McRobotson (fuel-devops-robot) wrote : Related fix proposed to packages/trusty/ceph (7.0)

Related fix proposed to branch: 7.0
Change author: Alexei Sheplyakov <email address hidden>
Review: https://review.fuel-infra.org/16073

Revision history for this message
Alexei Sheplyakov (asheplyakov) wrote : Re: ceph firefly: ponential data loss when snapshotting rbd volume

CVE-2015-5245 is a radosgw bug and has nothing to do with this problem; removing the CVE link.

Revision history for this message
Alexei Sheplyakov (asheplyakov) wrote :

Apparently ceph hammer is also affected: http://tracker.ceph.com/issues/14428

summary: - ceph firefly: ponential data loss when snapshotting rbd volume
+ ceph (hammer, firefly): ponential data loss when snapshotting rbd volume
summary: - ceph (hammer, firefly): ponential data loss when snapshotting rbd volume
+ ceph (hammer, firefly): potential data loss when snapshotting rbd volume
Revision history for this message
Fuel Devops McRobotson (fuel-devops-robot) wrote : Fix proposed to packages/trusty/ceph (8.0)

Fix proposed to branch: 8.0
Change author: Alexei Sheplyakov <email address hidden>
Review: https://review.fuel-infra.org/16535

Revision history for this message
Fuel Devops McRobotson (fuel-devops-robot) wrote : Fix merged to packages/trusty/ceph (8.0)

Reviewed: https://review.fuel-infra.org/16535
Submitter: Pkgs Jenkins <email address hidden>
Branch: 8.0

Commit: 62f26b7c41182509cea61848be2f35aa9968cd32
Author: Alexei Sheplyakov <email address hidden>
Date: Thu Jan 28 15:45:48 2016

Make the recovery after a crash during snapshotting easier

Added an upstream patch [1] which solves the problem.

[1] https://github.com/ceph/ceph/commit/aba6746b850e9397ff25570f08d0af8847a7162c

Closes-Bug: #1532967
Change-Id: I987cb9481507a02bf91c4bdd4e8669897c8fdde6

Revision history for this message
Maksym Strukov (unbelll) wrote :

Verified in 8.0-570:

root@node-2:~# ceph --version
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)

Revision history for this message
Alexey Stupnikov (astupnikov) wrote :

We no longer support MOS 5.1, MOS 6.0, or MOS 6.1.
We deliver only Critical/Security fixes to MOS 7.0 and MOS 8.0.
We deliver only High/Critical/Security fixes to MOS 9.2.
