Stale data can be served from cache when a server disconnects due to a network partition and reconnects

Bug #1819957 reported by Morgan Fainberg on 2019-03-13
This bug affects 2 people
Affects                          Importance   Assigned to
OpenStack Identity (keystone)    High         Morgan Fainberg
OpenStack Security Advisory      Undecided    Unassigned
keystonemiddleware               High         Morgan Fainberg
oslo.cache                       High         Morgan Fainberg

Bug Description

The flush_on_reconnect optional flag is not used. This can cause stale data to be utilized from a cache server that disconnected due to a network partition. This has security concerns as follows:

1. Password changes/user changes may be reverted for the cache TTL
   1a. User may get locked out if PCI-DSS is on and the password change happens during the network partition.
2. Grant changes may be reverted for the cache TTL
3. Resources (all types) may become "undeleted" for the cache TTL
4. Tokens (KSM) may become valid again during the cache TTL

As noted in the python-memcached library:

    @param flush_on_reconnect: optional flag which prevents a
            scenario that can cause stale data to be read: If there's more
            than one memcached server and the connection to one is
            interrupted, keys that mapped to that server will get
            reassigned to another. If the first server comes back, those
            keys will map to it again. If it still has its data, get()s
            can read stale data that was overwritten on another
            server. This flag is off by default for backwards
            compatibility.
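
For illustration, a minimal sketch of that scenario with python-memcached, assuming two memcached servers at the example addresses below and that the key hashes to the first one; the partition itself has to be induced externally (e.g. by stopping the first server):

    import memcache

    # Hypothetical server addresses; the key below is assumed to hash to the
    # first server.
    client = memcache.Client(['192.0.2.10:11211', '192.0.2.11:11211'])
    client.set('user-password-hash', 'old-hash')

    # -- 192.0.2.10 becomes unreachable (network partition) --
    # python-memcached marks that server dead and the key is remapped to
    # 192.0.2.11, so the updated value is written there instead:
    client.set('user-password-hash', 'new-hash')

    # -- 192.0.2.10 comes back and the client reconnects to it --
    # Without flush_on_reconnect, the key maps back to 192.0.2.10, which
    # still holds the stale value:
    client.get('user-password-hash')  # may return 'old-hash'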

The solution is to explicitly pass flush_on_reconnect as an optional argument. A concern with this model is that the memcached servers may be utilized by other tooling and may lose cache state (in the case the oslo.cache connection is the only thing affected by the network partitioning).

This similarly needs to be addressed in pymemcache when it is utilized in lieu of python-memcached.
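
As a sketch of the proposed fix with python-memcached (same example addresses as above), the flag is simply passed at client construction:

    import memcache

    # With flush_on_reconnect enabled, a server's contents are flushed when
    # the client reconnects to it, so the stale entries above cannot be read
    # back after the partition heals.
    client = memcache.Client(
        ['192.0.2.10:11211', '192.0.2.11:11211'],
        flush_on_reconnect=True,
    )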

tags: added: caching security
Changed in keystone:
importance: Undecided → High
Changed in keystonemiddleware:
importance: Undecided → High
Changed in oslo.cache:
importance: Undecided → High
Changed in keystone:
assignee: nobody → Morgan Fainberg (mdrnstm)
Changed in keystonemiddleware:
assignee: nobody → Morgan Fainberg (mdrnstm)
Changed in oslo.cache:
assignee: nobody → Morgan Fainberg (mdrnstm)
Ben Nemec (bnemec) wrote :

"A concern with this model is that the memcached servers may be utilized by other tooling and may lose cache state (in the case the oslo.cache connection is the only thing affected by the network partitioning)."

That may be, but I'd rather have an empty but consistent cache than a full but incorrect one. Hopefully network partitions aren't a particularly common occurrence anyway.

So I guess +1 on setting this option.

Colleen Murphy (krinkle) on 2019-03-16
Changed in keystone:
milestone: none → stein-rc1
Colleen Murphy (krinkle) on 2019-03-18
Changed in keystone:
status: New → Triaged
Changed in keystonemiddleware:
status: New → Triaged
Jeremy Stanley (fungi) wrote :

Unless there's a way for a malicious actor to trigger and take advantage of this condition, this is probably a class D (security hardening opportunity) report: https://security.openstack.org/vmt-process.html#incident-report-taxonomy

Changed in ossa:
status: New → Won't Fix
information type: Public Security → Public

Fix proposed to branch: master
Review: https://review.openstack.org/644774

Changed in oslo.cache:
status: New → In Progress
Colleen Murphy (krinkle) on 2019-03-20
Changed in keystone:
milestone: stein-rc1 → stein-rc2
Morgan Fainberg (mdrnstm) wrote :

Keystone is fixed by the oslo.cache fix; marking this as Invalid for keystone.

Changed in keystone:
status: Triaged → Invalid
Colleen Murphy (krinkle) on 2019-03-25
Changed in keystone:
milestone: stein-rc2 → none

Reviewed: https://review.openstack.org/644774
Committed: https://git.openstack.org/cgit/openstack/oslo.cache/commit/?id=1192f185a5fd2fa6177655f157146488a3de81d1
Submitter: Zuul
Branch: master

commit 1192f185a5fd2fa6177655f157146488a3de81d1
Author: Morgan Fainberg <email address hidden>
Date: Fri Mar 22 12:35:16 2019 -0700

    Pass `flush_on_reconnect` to memcache pooled backend

    If a memcache server disappears and then reconnects when multiple memcache
    servers are used (specific to the python-memcached based backends) it is
    possible that the server will contain stale data. The default is now to
    supply the ``flush_on_reconnect`` optional argument to the backend. This
    means that when the service connects to a memcache server, it will flush
    all cached data in the server. The pooled backend is more likely to
    run into issues with this as it does not explicitly use a thread.local
    for the client. The non-pooled backend was not touched, it is not
    the recommended production use-case.

    See the help from python-memcached:

        @param flush_on_reconnect: optional flag which prevents a
                scenario that can cause stale data to be read: If there's more
                than one memcached server and the connection to one is
                interrupted, keys that mapped to that server will get
                reassigned to another. If the first server comes back, those
                keys will map to it again. If it still has its data, get()s
                can read stale data that was overwritten on another
                server. This flag is off by default for backwards
                compatibility.

    Change-Id: I3e335261f749ad065e8abe972f4ac476d334e6b3
    closes-bug: #1819957
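
For context, a minimal sketch of enabling the pooled backend through oslo.cache's standard API (option names from oslo.cache's [cache] group; example server addresses only): with a release containing this commit, flush_on_reconnect is supplied automatically, so no new option is needed.

    from oslo_cache import core as cache
    from oslo_config import cfg

    CONF = cfg.CONF
    cache.configure(CONF)  # registers the [cache] options

    # Example values only; in a real service these come from the config file.
    CONF.set_override('enabled', True, group='cache')
    CONF.set_override('backend', 'oslo_cache.memcache_pool', group='cache')
    CONF.set_override('memcache_servers',
                      ['192.0.2.10:11211', '192.0.2.11:11211'],
                      group='cache')

    region = cache.create_region()
    cache.configure_cache_region(CONF, region)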

Changed in oslo.cache:
status: In Progress → Fix Released