EC: Swift can return corrupted data and lose data with the isa_l_rs_vand policy at >=5 parities

Bug #1639691 reported by Kota Tsuyuzaki on 2016-11-07
Affects                            Importance   Assigned to
OpenStack Object Storage (swift)   Medium       Unassigned
OpenStack Security Advisory        Undecided    Unassigned
PyECLib                            Undecided    Unassigned
liberasurecode                     Undecided    Unassigned

Bug Description

This is what I have been fighting over the last week; here I'd like to summarize how it relates to, and impacts, Swift/PyECLib/liberasurecode.

I DO NOT think this is a security risk, because it doesn't enable an information leak or a significant resource attack from outside, but it has significant impact for users because of the risk of data loss. This is why I set this as private-security temporarily. We can make this public after notmyname confirms the impact, according to his decision.

To the Swift API:
With an isa_l_rs_vand EC policy, pyeclib <= 1.3.1, and liberasurecode <= 1.2.1 with >= 5 parity fragments configured, you can *RARELY* get corrupted data back *WITHOUT ANY ERRORS* on download. If you are using the 'swift' command to download, the error most likely appears as:

ubuntu@saio:~$ swift download ectest vagrant/.scratch/swift.tar.gz -o delete.me
    Error downloading object 'ectest/vagrant/.scratch/swift.tar.gz': u'Error downloading vagrant/.scratch/swift.tar.gz: md5sum != etag, 4f327eed4b9b9210d13a30c9441fefa1 != 5b60a22fcf65a356391cbdb08b9140d8'

And one more significant point: NOTHING IS QUARANTINED as a result, so Swift now regards the corrupted object as a correct one.
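The `md5sum != etag` error above comes from a client-side check: the swift CLI hashes the body it received and compares the digest against the object's ETag. A minimal sketch of that check (illustrative names, not the actual swiftclient code):

```python
# Sketch of the client-side integrity check: re-hash the downloaded body
# and compare against the ETag header. Only the client's own re-hash
# catches the corruption -- the server returned 200 with no error.
import hashlib

def verify_download(body, etag):
    """Return True iff the downloaded body matches the object's ETag."""
    return hashlib.md5(body).hexdigest() == etag

good = b'hello world'
assert verify_download(good, hashlib.md5(good).hexdigest())
# A silently corrupted body fails only this client-side comparison:
assert not verify_download(b'hellX world', hashlib.md5(good).hexdigest())
```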

The reason I said "RARELY" above is that the corruption depends on the fragment combination. You can find the buggy combinations with the following script, which checks the isa_l_rs_vand k=10, m=5 case:

from pyeclib.ec_iface import ECDriver
from itertools import combinations

if __name__ == '__main__':
    driver = ECDriver(ec_type='isa_l_rs_vand', k=10, m=5)
    orig_data = 'a' * 1024 * 1024
    encoded = driver.encode(orig_data)

    # pair each fragment with its index as (index, frag_data) to feed to combinations
    frags = [(i, frag) for i, frag in enumerate(encoded)]
    bad_combinations = []
    check_frags_combinations = list(combinations(frags, 10))
    for check_frags in check_frags_combinations:
        check_frags_dict = dict(check_frags)
        decoded = driver.decode(check_frags_dict.values())
        # the decoded result must match the original data
        try:
            assert decoded == orig_data
        except AssertionError:
            bad_combinations.append(list(check_frags_dict))

    if bad_combinations:
        ratio = float(len(bad_combinations)) / len(check_frags_combinations)
        print 'Bad combinations found: %s/%s (ratio: %s)' % (
            len(bad_combinations), len(check_frags_combinations), ratio)
        print bad_combinations

The result of the script should be:

Bad combinations found: 10/3003 (ratio: 0.00333000333)
[[0, 1, 2, 3, 5, 6, 8, 10, 11, 14], [0, 1, 2, 3, 5, 7, 8, 10, 13, 14], [0, 1, 2, 4, 5, 7, 9, 10, 11, 14], [0, 1, 2, 4, 6, 7, 9, 10, 13, 14], [0, 1, 3, 4, 6, 8, 9, 10, 11, 14], [0, 1, 3, 5, 6, 8, 9, 10, 13, 14], [0, 2, 3, 5, 7, 8, 9, 10, 11, 14], [0, 2, 4, 5, 7, 8, 9, 10, 13, 14], [1, 2, 4, 6, 7, 8, 9, 10, 11, 14], [1, 3, 4, 6, 7, 8, 9, 10, 13, 14]]

So bad combinations appear roughly 1 time in 300.
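Where 3003 and the ~1/300 ratio come from: with k=10 data and m=5 parity fragments there are 15 fragments total, and any 10 of them form a candidate decode set:

```python
# C(15, 10) counts the ways to pick 10 fragments out of 15; the script
# above found 10 of those 3003 decode sets return corrupted data.
from math import comb

total = comb(15, 10)       # 3003 candidate decode sets
bad = 10                   # bad combinations found by the script above
print(total, bad / total)  # the ratio is about 1/300
```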

The cause of the bug:

The bug is in the liberasurecode isa-l backend implementation. I asked the isa-l maintainer whether isa-l could return such corrupted data without errors[1]. He pointed out that gf_gen_rs_matrix (used in the isa_l_rs_vand encode/decode/reconstruct process) can produce a non-invertible matrix, meaning some fragment combinations are unrecoverable, so the bad patterns are an inherent isa-l constraint. However, he also pointed out that these cases can be detected by asserting the return value of gf_invert_matrix, which liberasurecode does not do. That is the main reason Swift/PyECLib/liberasurecode can produce such a garbage decoding result.

I made a patch[2] that enables liberasurecode to return an error instead of corrupted data. With [2], Swift will return an error (a 5xx) in the bad-pattern cases.
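To illustrate the mechanism and the guard in patch [2]: the real arithmetic is over GF(2^8) inside isa-l, but a toy GF(2) code with k=2, m=2 shows the same failure shape. The generator matrix below is deliberately not MDS (fragments 2 and 3 use the same row), so the subset {2, 3} cannot recover the data. gf_gen_rs_matrix and gf_invert_matrix are the real isa-l functions named above; everything in this sketch is otherwise illustrative.

```python
# Toy analogue over GF(2): fragment i is G[i] . data. Rows 2 and 3 are
# identical, so the decode matrix for fragments {2, 3} is singular --
# the same shape of failure as gf_gen_rs_matrix at >= 5 parities.
G = [[1, 0], [0, 1], [1, 1], [1, 1]]

def encode(data):  # data = [d0, d1], bits
    return [(row[0] * data[0] + row[1] * data[1]) % 2 for row in G]

def decode(frag_ids, frag_vals):
    (a, b), (c, d) = G[frag_ids[0]], G[frag_ids[1]]
    det = (a * d + b * c) % 2  # determinant over GF(2)
    if det == 0:
        # the moral equivalent of checking gf_invert_matrix's return value:
        # raise instead of returning garbage
        raise ValueError('decode matrix not invertible for %s' % frag_ids)
    # Cramer's rule over GF(2); det == 1 so no division is needed
    d0 = (d * frag_vals[0] + b * frag_vals[1]) % 2
    d1 = (c * frag_vals[0] + a * frag_vals[1]) % 2
    return [d0, d1]

frags = encode([1, 0])
assert decode([0, 1], [frags[0], frags[1]]) == [1, 0]  # good subset
assert decode([0, 3], [frags[0], frags[3]]) == [1, 0]  # good subset
try:
    decode([2, 3], [frags[2], frags[3]])               # bad subset
except ValueError:
    print('bad combination detected instead of returning garbage')
```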

Impact on existing clusters:

If you have an existing cluster with an isa-l policy at >= 5 parities, it is still problematic even after you apply fix [2]. That is because you are probably running the *object-reconstructor* in the cluster, and the object-reconstructor is affected by this issue just like the Swift API. The following script extends the previous one to show what happens when a fragment in a good pattern is replaced with a fragment reconstructed from a bad pattern.

from pyeclib.ec_iface import ECDriver
from itertools import combinations

def test_ec_driver(k, m):
    driver = ECDriver(ec_type='isa_l_rs_vand', k=k, m=m)
    orig_data = 'a' * 1000
    encoded = driver.encode(orig_data)

    # pair each fragment with its index as (index, frag_data) to feed to combinations
    frags = [(i, frag) for i, frag in enumerate(encoded)]
    an_ok_pattern = None

    for check_frags in combinations(frags, k):
        check_frags_dict = dict(check_frags)
        decoded = driver.decode(check_frags_dict.values())
        # the decoded result must match the original data
        try:
            assert decoded == orig_data
            if not an_ok_pattern:
                # save this ok pattern
                an_ok_pattern = dict(check_frags_dict)
        except AssertionError:
            # find missing index
            for x in range(k + m):
                if x not in check_frags_dict:
                    break
            else:
                raise Exception('something wrong')
            print 'try reconstruct index %s with %s' % (
                x, check_frags_dict.keys())
            # try to reconstruct
            reconed = driver.reconstruct(check_frags_dict.values(), [x])[0]
            try:
                assert reconed == frags[x][1]  # frags holds (index, data) tuples
            except AssertionError:
                if x not in an_ok_pattern:
                    raise Exception('need to make this script better')
                # sanity check
                decoded = driver.decode(an_ok_pattern.values())
                assert decoded == orig_data
                print 'this is ok'
                # try to decode again including the garbage reconstructed one
                an_ok_pattern[x] = reconed
                decoded = driver.decode(an_ok_pattern.values())
                assert decoded == orig_data
                print 'this is ok if the assertion above passed'

if __name__ == '__main__':
    params = [(10, 5)]
    for k, m in params:
        test_ec_driver(k, m)

As you can see, a bad pattern can infect a good pattern with corrupted frags. And right now we have no way to tell a good pattern apart from one infected with garbage, short of decoding all the fragments in the fragment archive and recomputing the original object's etag. Even once we find a bad pattern, I don't think there is a way to detect which fragment(s) are the bad one(s), which means that once infected, we cannot stop the pandemic anymore: the infected object will turn to garbage across all fragment archives, step by step. This is the worst problem.

For newly incoming objects and brand-new cluster deployments, I think fix [2] is enough, if you can accept seeing errors roughly 1 time in 300.

What we can do for this pandemic:

From the patches perspective, we should land [2], which prevents a *future pandemic*, as soon as possible.

For existing cluster users, sorry, all I can say is "Please plan to migrate to a safer policy as soon as possible, before the pandemic leads to data loss". I'm also trying to add greedy testing [5] to verify result consistency in decode/reconstruct, so we can know which parameters are safe. Otherwise, we may leave the same pandemic possibility open in another policy.

From the isa-l perspective, we know isa-l is the highest-performance open-source engine among the available backends, so this constraint will be a problem when choosing one. To avoid the bad combinations, the isa-l maintainer suggested building the backend on another matrix (gf_gen_cauchy1_matrix), which remains invertible in the >= 5 parities case. I drafted that at [3][4], and in my experiments it shows performance similar to isa_l_rs_vand, without the constraint we hit.

PyECLib/liberasurecode Version Consideration:

In my opinion, we should land [2] as soon as possible and make it a Swift requirement to prevent the pandemic in the future. However, right now we have no way to express a version constraint on a C library binary. To enable the requirement, my idea is the following steps:

1. Land [2] on liberasurecode master
2. Bump the liberasurecode version
3. Hard-code a constraint in PyECLib requiring liberasurecode >= (step 2) (this can be painful because xenial and yakkety ship <= 1.2.1)
4. Bump the pyeclib version
5. Set the pyeclib requirement in Swift's requirements.txt (that commit can carry a Closes-Bug tag for this bug report)

Or any better idea is always welcome to me.

1: https://github.com/01org/isa-l/issues/10
2: https://review.openstack.org/#/c/393595/
3: https://review.openstack.org/#/c/393263/
4: https://review.openstack.org/#/c/393276/
5: https://review.openstack.org/#/c/393656/

Revision history for this message
Kota Tsuyuzaki (tsuyuzaki-kota) wrote :

Set this as private because it seems sensitive, but I don't think this is a security issue. Just leaving the decision on how to handle this to notmyname.

Changed in swift:
importance: Undecided → Critical
status: New → Confirmed
Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Since this report concerns a possible security risk, an incomplete security advisory task has been added while the core security reviewers for the affected project or projects confirm the bug and discuss the scope of any vulnerability along with potential solutions.

Changed in ossa:
status: New → Incomplete
description: updated
Revision history for this message
John Dickinson (notmyname) wrote :

I agree this is not a security bug, and thus doesn't need to be embargoed because of that. However, I think Kota is wise for keeping this private for now. Although there has been some limited public discussion of this issue in IRC, this is the bug that is tracking the issue.

I'd prefer that this issue stay private until we have a better-defined (and implemented or in-progress) plan for users who may be affected by this. Kota's plan in the bug description is good. I'd add that we should consider providing a migration script for users to run if they are affected.

Revision history for this message
Kota Tsuyuzaki (tsuyuzaki-kota) wrote :

Thanks John for the comment. I talked with Clay about the migration process, and I'm realizing that re-putting all objects in containers using isa-l with >= 5 parities can mitigate the destruction of other healthy fragments. That is because the re-put process works as a decode/encode, and the re-stored object's md5 is verified because the copy process carries the etag from source to destination. Once we install a liberasurecode with [2] from the description, which prevents corrupted frags from bad patterns (raising an error instead), and then re-put everything, we can probably prevent future corruption, at the cost of allowing errors roughly 1 time in 300 on GET/reconstruct.

The attached file is a draft of the re-put script. I don't know how robust we should make it (i.e. how to catch failure cases while the re-put runs). The current version lists all containers with a specified policy and tries to re-put the objects in those containers; if it gets errors such as 422 Unprocessable Entity (etag mismatch), 500 Internal Server Error (which will occur with [2] in the bad-pattern case), etc., it dumps the failed objects in a JSON format that can be fed back to the script.
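The failure-recording behavior described above might be sketched like this (a hypothetical helper, not the attached script itself; the status codes are illustrative of etag-mismatch and server-error responses):

```python
# Sketch: after attempting a re-PUT of each object, record the failures
# as JSON so the dump can be fed back into the script for another pass.
import json

RETRYABLE = {422, 500, 503}  # etag mismatch, server error, unavailable

def record_failures(results):
    """results: iterable of (container, obj, http_status) tuples from
    re-put attempts. Returns a JSON document of objects to retry."""
    failed = [
        {'container': c, 'object': o, 'status': s}
        for c, o, s in results
        if s in RETRYABLE
    ]
    return json.dumps(failed)

out = record_failures([('ectest', 'a.tar.gz', 201),
                       ('ectest', 'b.tar.gz', 422)])
print(out)  # only b.tar.gz is recorded for a retry pass
```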

Anyway, IMO, we need [2] in the description in the liberasurecode master.

Revision history for this message
clayg (clay-gerrard) wrote :

This script seems to be for a single *account*; at least it mostly operates as "given creds => list all containers; if policy_name == options.policy_name: do_reputs".

That's good.

I think we also need the script to be able to take a list of account names and a set of superadmin/reselleradmin creds and reput all containers - with some concurrency.

That tooling ready for master, plus packaging/releasing the liberasurecode change [1], is all we need for a plan for now, IMHO.

1. https://review.openstack.org/#/c/393595

Revision history for this message
Kota Tsuyuzaki (tsuyuzaki-kota) wrote :

@Clay:

Yes, it's for a single account, and that's why I separated out the reput_objects function so it can be called for each account.

On further thought, I found an issue with the re-put script. It works *basically* for stored objects, but I found one type of object that doesn't: a delete marker in versioned-writes history mode, which includes ';' in its content-type. That results in a 400 Bad Request on re-put.

I pushed a solution for that as separated patch, here https://review.openstack.org/#/c/396603/

However, I'm not sure right now whether Swift has other middleware that creates such non-re-put-able content-types. If it's only the delete marker, it's OK to skip them, because they are 0-byte objects that don't need the re-decode/encode.

Revision history for this message
Kota Tsuyuzaki (tsuyuzaki-kota) wrote :

One more thing we have to consider: how to force the use of liberasurecode >= 1.3.1 to prevent the crisis.

I think we have a couple of ways to solve the dependency

1. use bindep requirements with version

Since OpenStack projects use bindep, we can add a constraint as in https://review.openstack.org/#/c/395998/4/bindep.txt@7 (removing the comment marker).

However, I confirmed that the bindep requirement refers to the package-manager version (e.g. apt, yum), so even if we install the newest liberasurecode 1.3.1 from source, the bindep requirement still fails due to liberasurecode 1.1.0 in xenial.

2. Use hard coded requirement

With https://review.openstack.org/#/c/395998, I've been trying to hard-code the requirement. That seems to work: if the newest version is installed, pyeclib works fine, and if an old version is installed, pyeclib raises ECIncompatibleLiberasurecode at module load. However, the implementation has two issues.

2-1. We cannot verify the liberasurecode version if the .so file is later overwritten from source, because the assertion reads the erasurecode_version.h header file when building the pyeclib C code.

2-2. With https://review.openstack.org/#/c/395998/, we have to figure out how to install liberasurecode from source in the gate so the patch can land.
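The hard-coded gate in option 2 amounts to a version comparison at load time. A sketch of that idea (ECIncompatibleLiberasurecode is the exception name mentioned above; how the version string is obtained, and the other names, are illustrative -- the real patch reads erasurecode_version.h at build time):

```python
# Sketch of a runtime version gate: compare the detected liberasurecode
# version against a required minimum and refuse to load if too old.
REQUIRED = (1, 3, 1)

class ECIncompatibleLiberasurecode(Exception):
    pass

def check_liberasurecode(version_str):
    found = tuple(int(x) for x in version_str.split('.'))
    if found < REQUIRED:
        raise ECIncompatibleLiberasurecode(
            'liberasurecode %s < required %s' %
            (version_str, '.'.join(map(str, REQUIRED))))

check_liberasurecode('1.3.1')      # ok: meets the minimum
try:
    check_liberasurecode('1.1.0')  # the version shipped in xenial
except ECIncompatibleLiberasurecode as e:
    print(e)
```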

Revision history for this message
John Dickinson (notmyname) wrote :

after today's meeting, I think we have a reasonable plan for how to get releases going:

 we release swift with a changelog mentioning that deployers should upgrade libec. then we also release pyeclib with a warning if libec<1.3.1. then after libec is in distros, roll those hard requirements forward through the dep chain

...from http://eavesdrop.openstack.org/meetings/swift/2016/swift.2016-11-16-21.00.log.html

----

The last thing I'd like to see before we open this bug is a script (or process) to point people to that gives a mechanism to move existing at-risk data to a safe configuration.

Revision history for this message
John Dickinson (notmyname) wrote :

For deployers who are using isa_l_rs_vand with more than 4 parity, you
should take immediate action to protect data in your cluster.

1. Upgrade to liberasurecode 1.3.1 or later
2. For any objects that are currently in an affected storage policy,
   re-put those into the cluster to fix any repairable data corruption.
   The script on comment #4 of this bug report can be used.

We expect that liberasurecode 1.3.2 will include support for
isa_l_rs_cauchy (for new policies) which will not have a
restriction on the number of parity bits.

information type: Private Security → Public
description: updated
Changed in liberasurecode:
status: New → Fix Released
Changed in pyeclib:
status: New → In Progress
Jeremy Stanley (fungi) on 2016-11-17
Changed in ossa:
status: Incomplete → Won't Fix
status: Won't Fix → Invalid
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to pyeclib (master)

Reviewed: https://review.openstack.org/398744
Committed: https://git.openstack.org/cgit/openstack/pyeclib/commit/?id=d163972bb04faf192d0ba65451dcc388b5bb19f0
Submitter: Jenkins
Branch: master

commit d163972bb04faf192d0ba65451dcc388b5bb19f0
Author: Kota Tsuyuzaki <email address hidden>
Date: Wed Nov 16 20:42:27 2016 -0800

    Add soft warning log line when using liberasurecode <1.3.1

    To apply the fix for a liberasurecode issue [1], we need hard depencency
    of liberasurecode requires >=1.3.1. However current binary dependency
    maintainance tool "bindep" works only for packagers' repository. (i.e. it
    refers the version of apt/yum/etc...) And nothing is cared for the binary
    built from source.

    This patch provides a way to detect incompatible liberasurecode and
    makes a warning log line to syslog which suggest "you're using older
    liberasurecode which will be deprecated, please upgrade it".

    NOTE:
    - This dependency managemnet depends on erasurecode_version.h header
      file in liberasurecode. i.e. it cannot care of overwritten .so library
      after PyECLib built once.

    Partial-Bug: #1639691

    1: Icee788a0931fe692fe0de31fabc4ba450e338a87

    Change-Id: Ice5e96f0a59096cc9067823f0d62d6c7065ed62f

Revision history for this message
clayg (clay-gerrard) wrote :

I think the critical nature of this bug is reduced with liberasurecode 1.3.1

I don't think swift's handling of isa-l policies can change in a reasonable way to close this issue until isa-l-rs-cauchy is available.

And as soon as isa-l-rs-cauchy support is released in pyeclib swift deployments can consume it - so I don't think this bug is a swift release blocker.

We really only talk about isa-l and liberasurecode here:

http://docs.openstack.org/developer/swift/overview_erasure_code.html#pyeclib-external-erasure-code-library

Changed in swift:
importance: Critical → High
Revision history for this message
clayg (clay-gerrard) wrote :

Now that isa-l-rs-cauchy is available, we should consider what Swift can do when you have an isa-l-rs-vand policy with > 4 parity frags that is not acknowledged as pre-existing...

I would support a change that would require such policies to be deprecated - then something in the release notes:

  * you must immediately deprecate isa-l-rs-vand policies with >4 parity or your swift won't start
  * you should migrate data out of them
  * you may upgrade liberasurecode/pyeclib and create isa-l-rs-cauchy policies with >4 parity
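For illustration, deprecating such a policy in swift.conf might look like the following (key names as in Swift's storage policy configuration; the policy index and name are made up):

```
# swift.conf -- an affected policy marked deprecated, so no new
# containers can be created with it (index and name illustrative)
[storage-policy:2]
name = ec-isa-l-10-5
policy_type = erasure_coding
ec_type = isa_l_rs_vand
ec_num_data_fragments = 10
ec_num_parity_fragments = 5
deprecated = yes
```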

Some OpenStack policy on upgrades probably says that we can't merge a fix that requires a config change to upgrade, but my better senses say preventing the proliferation of the risk of data loss is a reasonable exception. I'd feel like a real piece of work if someone ended up *creating* such a policy just because it's "only a warning".

Changed in swift:
importance: High → Critical
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to swift (master)

Reviewed: https://review.openstack.org/425496
Committed: https://git.openstack.org/cgit/openstack/swift/commit/?id=c7bffd6cee400b703c76e47f8b8602ca775d69ec
Submitter: Jenkins
Branch: master

commit c7bffd6cee400b703c76e47f8b8602ca775d69ec
Author: Tim Burke <email address hidden>
Date: Thu Jan 26 02:03:28 2017 +0000

    Warn about using EC with isa_l_rs_vand and nparity >= 5

    We *know* there are combinations that will prevent decoding, and we're
    going to be increasingly aggressive about getting it out of clusters.

    Partial-Bug: 1639691
    Change-Id: I50159c9d19f2385d5f60112e9aaefa1a68098313

Revision history for this message
clayg (clay-gerrard) wrote :

With the nice warning in place I think we've done a fair job here trying to dissuade new storage policies with bad configs. At some point we can enforce deprecation.

Changed in swift:
importance: Critical → High
importance: High → Medium
Revision history for this message
Tim Burke (1-tim-z) wrote :

Definitely will need to; may even want to have some "double secret deprecation" where you can't even create new objects in the affected containers.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to swift (stable/newton)

Fix proposed to branch: stable/newton
Review: https://review.openstack.org/436668

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to swift (stable/newton)

Reviewed: https://review.openstack.org/436668
Committed: https://git.openstack.org/cgit/openstack/swift/commit/?id=bd75a97ad47fa28025d3a769f1eb7f05072dcd25
Submitter: Jenkins
Branch: stable/newton

commit bd75a97ad47fa28025d3a769f1eb7f05072dcd25
Author: Tim Burke <email address hidden>
Date: Thu Jan 26 02:03:28 2017 +0000

    Warn about using EC with isa_l_rs_vand and nparity >= 5

    We *know* there are combinations that will prevent decoding, and we're
    going to be increasingly aggressive about getting it out of clusters.

    Partial-Bug: 1639691
    Change-Id: I50159c9d19f2385d5f60112e9aaefa1a68098313

tags: added: in-stable-newton
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to swift (master)

Reviewed: https://review.openstack.org/468105
Committed: https://git.openstack.org/cgit/openstack/swift/commit/?id=2c3ac543f4106171997752d5784732bda6e6bf3e
Submitter: Jenkins
Branch: master

commit 2c3ac543f4106171997752d5784732bda6e6bf3e
Author: Tim Burke <email address hidden>
Date: Thu May 25 09:40:47 2017 -0700

    Require that known-bad EC schemes be deprecated

    We said we were going to do it, we've had two releases saying we'd do
    it, we've even backported our saying it to Newton -- let's actually do
    it.

    Upgrade Consideration
    =====================

    Erasure-coded storage policies using isa_l_rs_vand and nparity >= 5 must
    be configured as deprecated, preventing any new containers from being
    created with such a policy. This configuration is known to harm data
    durability. Any data in such policies should be migrated to a new
    policy. See https://bugs.launchpad.net/swift/+bug/1639691 for more
    information.

    UpgradeImpact
    Related-Change: I50159c9d19f2385d5f60112e9aaefa1a68098313
    Change-Id: I8f9de0bec01032d9d9b58848e2a76ac92e65ab09
    Closes-Bug: 1639691

Changed in swift:
status: Confirmed → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/swift 2.15.0

This issue was fixed in the openstack/swift 2.15.0 release.
