Stale data causes keystone to raise an HTTP 500 during user password updates

Bug #1885753 reported by Lance Bragstad
Affects: OpenStack Identity (keystone)
Status: Fix Released
Importance: Undecided
Assigned to: Lance Bragstad

Bug Description

If two clients attempt to update the same user, they are susceptible to a race condition in the update_user method of the SQL driver.
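The failure is a plain read-modify-write race: both requests load the same rows inside a writer session, and the slower one then flushes an UPDATE against rows the faster one has already changed or removed. The following standalone SQLAlchemy snippet reproduces the same StaleDataError outside of keystone; the table and column names here are made up for illustration and are not keystone's schema:

#!/usr/bin/env python3
# Minimal reproduction of StaleDataError with two sessions racing on one row.
# Illustrative only -- this is not keystone code.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker
from sqlalchemy.orm.exc import StaleDataError

Base = declarative_base()


class Password(Base):
    __tablename__ = 'password'
    id = Column(Integer, primary_key=True)
    password_hash = Column(String(255))


# File-based SQLite so each session gets its own connection.
engine = create_engine('sqlite:///stale_demo.db')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

setup = Session()
setup.add(Password(id=1, password_hash='old-hash'))
setup.commit()
setup.close()

# Client A loads the row and holds on to it ...
session_a = Session()
row_a = session_a.get(Password, 1)

# ... while client B removes (or replaces) the same row and commits first.
session_b = Session()
session_b.query(Password).filter_by(id=1).delete()
session_b.commit()
session_b.close()

# Client A now writes its change back; the flush emits an UPDATE that
# matches 0 rows: "UPDATE statement on table 'password' expected to
# update 1 row(s); 0 were matched."
row_a.password_hash = 'hash-from-client-a'
try:
    session_a.commit()
except StaleDataError as exc:
    print('hit the race:', exc)
    session_a.rollback()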

I ran into this by using multiple (16) clients to run a script that updates the password for a user:

#!/bin/bash

set -x

# Clear any OS_* credential variables from the environment before exiting.
function cleanup {
    for key in $( set | awk -F= '/^OS_/ {print $1}' ); do unset "$key" ; done
    exit
}

trap cleanup SIGINT

USERNAME=test-user
while true
do
    # Pick a fresh random password on every iteration.
    PASSWORD=$(uuidgen)
    openstack --os-cloud openstack user set --password "$PASSWORD" "$USERNAME"
    if [ $? -ne 0 ]; then
        echo -e "\e[31mFAIL\e[0m"
        cleanup
    fi
done

Depending on the number of processes and threads, you will see an HTTP 500 with the following traceback in either keystone's log or the httpd error log:

[Tue Jun 30 16:29:45.241150 2020] [wsgi:error] [pid 33] [remote 192.168.24.11:56242]
  File "/usr/lib/python3.6/site-packages/keystone/identity/core.py", line 1182, in update_user
    ref = self._update_user_with_federated_objects(user, driver, entity_id)
  File "/usr/lib/python3.6/site-packages/keystone/identity/core.py", line 1136, in _update_user_with_federated_objects
    user = driver.update_user(entity_id, user)
  File "/usr/lib/python3.6/site-packages/keystone/common/sql/core.py", line 518, in wrapper
    return method(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/keystone/identity/backends/sql.py", line 239, in update_user
    user_ref.to_dict(include_extra_dict=True))
  File "/usr/lib64/python3.6/contextlib.py", line 88, in __exit__
    next(self.gen)
  File "/usr/lib/python3.6/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 1064, in _transaction_scope
    yield resource
  File "/usr/lib64/python3.6/contextlib.py", line 88, in __exit__
    next(self.gen)
  File "/usr/lib/python3.6/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 666, in _session
    self.session.rollback()
  File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
    raise value
  File "/usr/lib/python3.6/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 663, in _session
    self._end_session_transaction(self.session)
  File "/usr/lib/python3.6/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 691, in _end_session_transaction
    session.commit()
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/session.py", line 1026, in commit
    self.transaction.commit()
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/session.py", line 493, in commit
    self._prepare_impl()
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/session.py", line 472, in _prepare_impl
    self.session.flush()
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/session.py", line 2451, in flush
    self._flush(objects)
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/session.py", line 2589, in _flush
    transaction.rollback(_capture_exception=True)
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/util/compat.py", line 129, in reraise
    raise value
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/session.py", line 2549, in _flush
    flush_context.execute()
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/unitofwork.py", line 422, in execute
    rec.execute(self)
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/unitofwork.py", line 589, in execute
    uow,
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 236, in save_obj
    update,
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 1011, in _emit_update_statements
    % (table.description, len(records), rows)
sqlalchemy.orm.exc.StaleDataError: UPDATE statement on table 'password' expected to update 1 row(s); 0 were matched.

[0] http://paste.openstack.org/show/795389/
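For reference, SQLAlchemy raises StaleDataError when a flush emits an UPDATE (here against the 'password' table) that matches fewer rows than the unit of work expected, which is exactly what happens when a concurrent request has already changed or removed those rows before this one writes its changes back.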

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to keystone (master)

Fix proposed to branch: master
Review: https://review.opendev.org/738677

Changed in keystone:
assignee: nobody → Lance Bragstad (lbragstad)
status: New → In Progress
Changed in keystone:
assignee: Lance Bragstad (lbragstad) → Harry Rybacki (hrybacki-h)
Changed in keystone:
assignee: Harry Rybacki (hrybacki-h) → Lance Bragstad (lbragstad)
summary: - Stale data causes keystone to raise an HTTP 500 during update updates
+ Stale data causes keystone to raise an HTTP 500 during user password
+ updates
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/keystone 19.0.0.0rc2

This issue was fixed in the openstack/keystone 19.0.0.0rc2 release candidate.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to keystone (stable/victoria)

Reviewed: https://review.opendev.org/c/openstack/keystone/+/783399
Committed: https://opendev.org/openstack/keystone/commit/5b7d4c80d484262018f937083050844648f07a11
Submitter: "Zuul (22348)"
Branch: stable/victoria

commit 5b7d4c80d484262018f937083050844648f07a11
Author: Lance Bragstad <email address hidden>
Date: Tue Jun 30 11:50:41 2020 -0500

    Retry update_user when sqlalchemy raises StaleDataErrors

    Keystone's update_user() method in the SQL driver processes a lot of
    information about how to update users. This includes evaluating password
    logic and authentication attempts for PCI-DSS. This logic is evaluated
    after keystone pulls the user record from SQL and before it exits the
    context manager, which performs the write.

    When multiple clients are all updating the same user reference, it's
    more likely they will see an HTTP 500 because of race conditions exiting
    the context manager. The HTTP 500 is due to stale data when updating
    password expiration for old passwords, which happens when setting a new
    password for a user.

    This commit attempts to handle that case more gracefully than throwing a
    500 by detecting StaleDataErrors from sqlalchemy and retrying. The
    identity sql backend will retry the request for clients whose data
    changed from underneath them.

    Change-Id: I75590c20e90170ed862f46f0de7d61c7810b5c90
    Closes-Bug: 1885753
    (cherry picked from commit ceae3566e83b26fd6a1679154eae9b0cef29da64)
    (cherry picked from commit f47e635b8041542faa05e64606e66d2fbbc5f284)

tags: added: in-stable-victoria
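The fix retries update_user when SQLAlchemy reports stale data. Below is a minimal sketch of that retry behavior using oslo_db's wrap_db_retry; the FakeDriver class and the simulated first-attempt failure are made up for illustration, and the decorator arguments in the merged patch may differ.

#!/usr/bin/env python3
# Illustrative sketch of retry-on-StaleDataError using oslo_db's wrap_db_retry.
# Not the actual keystone patch -- names and the simulated failure are made up.
from oslo_db import api as oslo_db_api
from sqlalchemy.orm import exc as orm_exc


def _retry_on_stale_data(exc):
    # wrap_db_retry consults this checker to decide whether to re-invoke
    # the decorated method after an exception.
    return isinstance(exc, orm_exc.StaleDataError)


class FakeDriver(object):
    attempts = 0

    @oslo_db_api.wrap_db_retry(max_retries=5,
                               exception_checker=_retry_on_stale_data)
    def update_user(self, user_id, user):
        # Simulate losing the race on the first attempt; the decorator
        # re-runs the whole read-modify-write when StaleDataError is raised.
        FakeDriver.attempts += 1
        if FakeDriver.attempts == 1:
            raise orm_exc.StaleDataError('simulated concurrent update')
        return dict(user, id=user_id)


print(FakeDriver().update_user('abc123', {'name': 'test-user'}))

Because the whole method is re-run, the retry re-reads the user record, so the second attempt operates on fresh data instead of the stale rows that triggered the error.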
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to keystone (stable/ussuri)

Reviewed: https://review.opendev.org/c/openstack/keystone/+/783403
Committed: https://opendev.org/openstack/keystone/commit/07d3a3d3ff534a5295842d4f236042b30536cd82
Submitter: "Zuul (22348)"
Branch: stable/ussuri

commit 07d3a3d3ff534a5295842d4f236042b30536cd82
Author: Lance Bragstad <email address hidden>
Date: Tue Jun 30 11:50:41 2020 -0500

    Retry update_user when sqlalchemy raises StaleDataErrors

    Keystone's update_user() method in the SQL driver processes a lot of
    information about how to update users. This includes evaluating password
    logic and authentication attempts for PCI-DSS. This logic is evaluated
    after keystone pulls the user record from SQL and before it exits the
    context manager, which performs the write.

    When multiple clients are all updating the same user reference, it's
    more likely they will see an HTTP 500 because of race conditions exiting
    the context manager. The HTTP 500 is due to stale data when updating
    password expiration for old passwords, which happens when setting a new
    password for a user.

    This commit attempts to handle that case more gracefully than throwing a
    500 by detecting StaleDataErrors from sqlalchemy and retrying. The
    identity sql backend will retry the request for clients whose data
    changed from underneath them.

    Change-Id: I75590c20e90170ed862f46f0de7d61c7810b5c90
    Closes-Bug: 1885753
    (cherry picked from commit ceae3566e83b26fd6a1679154eae9b0cef29da64)
    (cherry picked from commit f47e635b8041542faa05e64606e66d2fbbc5f284)
    (cherry picked from commit 5b7d4c80d484262018f937083050844648f07a11)

tags: added: in-stable-ussuri
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to keystone (stable/train)

Reviewed: https://review.opendev.org/c/openstack/keystone/+/783404
Committed: https://opendev.org/openstack/keystone/commit/328cf33aab61775301adbb4c1f6abaa2f331cd94
Submitter: "Zuul (22348)"
Branch: stable/train

commit 328cf33aab61775301adbb4c1f6abaa2f331cd94
Author: Lance Bragstad <email address hidden>
Date: Tue Jun 30 11:50:41 2020 -0500

    Retry update_user when sqlalchemy raises StaleDataErrors

    Keystone's update_user() method in the SQL driver processes a lot of
    information about how to update users. This includes evaluating password
    logic and authentication attempts for PCI-DSS. This logic is evaluated
    after keystone pulls the user record from SQL and before it exits the
    context manager, which performs the write.

    When multiple clients are all updating the same user reference, it's
    more likely they will see an HTTP 500 because of race conditions exiting
    the context manager. The HTTP 500 is due to stale data when updating
    password expiration for old passwords, which happens when setting a new
    password for a user.

    This commit attempts to handle that case more gracefully than throwing a
    500 by detecting StaleDataErrors from sqlalchemy and retrying. The
    identity sql backend will retry the request for clients whose data
    changed from underneath them.

    Conflicts:
          keystone/tests/unit/test_backend_sql.py due to import order
          differences between train and ussuri. Also adjust the expected log
          message since the method path is different compared to older
          releases, which have the driver name in them (e.g., Identity).

    Change-Id: I75590c20e90170ed862f46f0de7d61c7810b5c90
    Closes-Bug: 1885753
    (cherry picked from commit ceae3566e83b26fd6a1679154eae9b0cef29da64)
    (cherry picked from commit f47e635b8041542faa05e64606e66d2fbbc5f284)
    (cherry picked from commit 5b7d4c80d484262018f937083050844648f07a11)
    (cherry picked from commit 07d3a3d3ff534a5295842d4f236042b30536cd82)
    (cherry picked from commit d4f48fc4e53f71d653e133104854f064fbb1b25f)

tags: added: in-stable-train
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to keystone (stable/stein)

Reviewed: https://review.opendev.org/c/openstack/keystone/+/783405
Committed: https://opendev.org/openstack/keystone/commit/f36034c8a6530b16f6b6eb88ee97f540c974ba00
Submitter: "Zuul (22348)"
Branch: stable/stein

commit f36034c8a6530b16f6b6eb88ee97f540c974ba00
Author: Lance Bragstad <email address hidden>
Date: Tue Jun 30 11:50:41 2020 -0500

    Retry update_user when sqlalchemy raises StaleDataErrors

    Keystone's update_user() method in the SQL driver processes a lot of
    information about how to update users. This includes evaluating password
    logic and authentication attempts for PCI-DSS. This logic is evaluated
    after keystone pulls the user record from SQL and before it exits the
    context manager, which performs the write.

    When multiple clients are all updating the same user reference, it's
    more likely they will see an HTTP 500 because of race conditions exiting
    the context manager. The HTTP 500 is due to stale data when updating
    password expiration for old passwords, which happens when setting a new
    password for a user.

    This commit attempts to handle that case more gracefully than throwing a
    500 by detecting StaleDataErrors from sqlalchemy and retrying. The
    identity sql backend will retry the request for clients whose data
    changed from underneath them.

    Conflicts:
          keystone/tests/unit/test_backend_sql.py due to import order
          differences between train and ussuri. Also adjust the expected log
          message since the method path is different compared to older
          releases, which have the driver name in them (e.g., Identity).

    Change-Id: I75590c20e90170ed862f46f0de7d61c7810b5c90
    Closes-Bug: 1885753
    (cherry picked from commit ceae3566e83b26fd6a1679154eae9b0cef29da64)
    (cherry picked from commit f47e635b8041542faa05e64606e66d2fbbc5f284)
    (cherry picked from commit 5b7d4c80d484262018f937083050844648f07a11)
    (cherry picked from commit 07d3a3d3ff534a5295842d4f236042b30536cd82)
    (cherry picked from commit d4f48fc4e53f71d653e133104854f064fbb1b25f)
    (cherry picked from commit 328cf33aab61775301adbb4c1f6abaa2f331cd94)

tags: added: in-stable-stein
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/keystone 16.0.2

This issue was fixed in the openstack/keystone 16.0.2 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/keystone 17.0.1

This issue was fixed in the openstack/keystone 17.0.1 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/keystone 20.0.0.0rc1

This issue was fixed in the openstack/keystone 20.0.0.0rc1 release candidate.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to keystone (stable/rocky)

Reviewed: https://review.opendev.org/c/openstack/keystone/+/783406
Committed: https://opendev.org/openstack/keystone/commit/6e29b6706acf293815ef89ec6f242d4272e841d8
Submitter: "Zuul (22348)"
Branch: stable/rocky

commit 6e29b6706acf293815ef89ec6f242d4272e841d8
Author: Lance Bragstad <email address hidden>
Date: Tue Jun 30 11:50:41 2020 -0500

    Retry update_user when sqlalchemy raises StaleDataErrors

    Keystone's update_user() method in the SQL driver processes a lot of
    information about how to update users. This includes evaluating password
    logic and authentication attempts for PCI-DSS. This logic is evaluated
    after keystone pulls the user record from SQL and before it exits the
    context manager, which performs the write.

    When multiple clients are all updating the same user reference, it's
    more likely they will see an HTTP 500 because of race conditions exiting
    the context manager. The HTTP 500 is due to stale data when updating
    password expiration for old passwords, which happens when setting a new
    password for a user.

    This commit attempts to handle that case more gracefully than throwing a
    500 by detecting StaleDataErrors from sqlalchemy and retrying. The
    identity sql backend will retry the request for clients whose data
    changed from underneath them.

    Conflicts:
          keystone/tests/unit/test_backend_sql.py due to import order
          differences between train and ussuri. Also adjust the expected log
          message since the method path is different compared to older
          releases, which have the driver name in them (e.g., Identity).

    Change-Id: I75590c20e90170ed862f46f0de7d61c7810b5c90
    Closes-Bug: 1885753
    (cherry picked from commit ceae3566e83b26fd6a1679154eae9b0cef29da64)
    (cherry picked from commit f47e635b8041542faa05e64606e66d2fbbc5f284)
    (cherry picked from commit 5b7d4c80d484262018f937083050844648f07a11)
    (cherry picked from commit 07d3a3d3ff534a5295842d4f236042b30536cd82)
    (cherry picked from commit d4f48fc4e53f71d653e133104854f064fbb1b25f)
    (cherry picked from commit 328cf33aab61775301adbb4c1f6abaa2f331cd94)
    (cherry picked from commit f36034c8a6530b16f6b6eb88ee97f540c974ba00)

tags: added: in-stable-rocky
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to keystone (stable/queens)

Reviewed: https://review.opendev.org/c/openstack/keystone/+/783408
Committed: https://opendev.org/openstack/keystone/commit/ebc3f805f275a267e58e8251757a8dbb211153a0
Submitter: "Zuul (22348)"
Branch: stable/queens

commit ebc3f805f275a267e58e8251757a8dbb211153a0
Author: Lance Bragstad <email address hidden>
Date: Tue Jun 30 11:50:41 2020 -0500

    Retry update_user when sqlalchemy raises StaleDataErrors

    Keystone's update_user() method in the SQL driver processes a lot of
    information about how to update users. This includes evaluating password
    logic and authentication attempts for PCI-DSS. This logic is evaluated
    after keystone pulls the user record from SQL and before it exits the
    context manager, which performs the write.

    When multiple clients are all updating the same user reference, it's
    more likely they will see an HTTP 500 because of race conditions exiting
    the context manager. The HTTP 500 is due to stale data when updating
    password expiration for old passwords, which happens when setting a new
    password for a user.

    This commit attempts to handle that case more gracefully than throwing a
    500 by detecting StaleDataErrors from sqlalchemy and retrying. The
    identity sql backend will retry the request for clients whose data
    changed from underneath them.

    Conflicts:
          keystone/tests/unit/test_backend_sql.py due to import order
          differences between train and ussuri. Also adjust the expected log
          message since the method path is different compared to older
          releases, which have the driver name in them (e.g., Identity).

    Change-Id: I75590c20e90170ed862f46f0de7d61c7810b5c90
    Closes-Bug: 1885753
    (cherry picked from commit ceae3566e83b26fd6a1679154eae9b0cef29da64)
    (cherry picked from commit f47e635b8041542faa05e64606e66d2fbbc5f284)
    (cherry picked from commit 5b7d4c80d484262018f937083050844648f07a11)
    (cherry picked from commit 07d3a3d3ff534a5295842d4f236042b30536cd82)
    (cherry picked from commit d4f48fc4e53f71d653e133104854f064fbb1b25f)
    (cherry picked from commit 328cf33aab61775301adbb4c1f6abaa2f331cd94)
    (cherry picked from commit f36034c8a6530b16f6b6eb88ee97f540c974ba00)
    (cherry picked from commit e828c6e3bb944721be26443f58074e096d96a651)

tags: added: in-stable-queens
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/keystone 18.1.0

This issue was fixed in the openstack/keystone 18.1.0 release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/keystone queens-eol

This issue was fixed in the openstack/keystone queens-eol release.

David Wilde (dave-wilde)
Changed in keystone:
status: In Progress → Fix Released
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/keystone rocky-eol

This issue was fixed in the openstack/keystone rocky-eol release.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix included in openstack/keystone stein-eol

This issue was fixed in the openstack/keystone stein-eol release.
