Deleted user can still delete volumes in Horizon

Bug #1842930 reported by Arthur Nikolayev
This bug affects 1 person
Affects / Status / Importance / Assigned to / Milestone
- OpenStack Dashboard (Horizon): Confirmed, Medium, Unassigned
- OpenStack Identity (keystone): Invalid, Medium, Unassigned
- OpenStack Security Advisory: Won't Fix, Undecided, Unassigned
- keystonemiddleware: Triaged, Medium, Unassigned

Bug Description

==Problem==
The user session in a second browser is not terminated after an admin deletes that user from another browser. The user is still able to manage some objects in a project (delete volumes, for example) after being deleted by the admin.

==Steps to reproduce==
Install OpenStack following the official docs for Stein.
Log in to Horizon as admin in one browser.
Create a user with the 'member' role and assign it to a project.
Open another browser and log in as the created user.
As the admin user, delete the created user from the "first" browser.
Switch to the "second" browser and try to browse through different sections of the dashboard as the deleted user: instances are not shown, but the deleted user can still list images, volumes, and networks, and can even delete a volume.

==Expected result==
The user session in the current browser is closed after the user is deleted in another browser.
I tried this in the Newton release and it works as expected (for a short time before the session ends, the deleted user can't list objects in Instances or Volumes).

==Environment==
OpenStack Stein
rpm -qa | grep -i stein
centos-release-openstack-stein-1-1.el7.centos.noarch

cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)

rpm -qa | grep -i horizon
python2-django-horizon-15.1.0-1.el7.noarch

rpm -qa | grep -i dashboard
openstack-dashboard-15.1.0-1.el7.noarch
openstack-dashboard-theme-15.1.0-1.el7.noarch

Revision history for this message
Jeremy Stanley (fungi) wrote :

Since this report concerns a possible security risk, an incomplete security advisory task has been added while the core security reviewers for the affected project or projects confirm the bug and discuss the scope of any vulnerability along with potential solutions.

Changed in ossa:
status: New → Incomplete
description: updated
Ivan Kolodyazhny (e0ne)
Changed in horizon:
status: New → Confirmed
importance: Undecided → High
Revision history for this message
Morgan Fainberg (mdrnstm) wrote :

Out of curiosity, are you using caching in any of the following areas:

Horizon/Django
Keystonemiddleware (for any of the services)?

It could be possible that the token response has been cached. If that is the case, then even after the user is deleted, a service that does not check with keystone has no way to know that the user/data/etc. has been deleted.

In the past the behavior I just described was highlighted in the documentation, but that information may have disappeared through a number of documentation/guide updates.

Let us know and we can work to further dig into the issue/duplicate/add documentation (depending on what is needed).

Thanks!

Revision history for this message
Akihiro Motoki (amotoki) wrote :

Horizon (the openstack_auth module) stores a token in the Django session store.
That is why a user can still access services even after the user is deleted:
the token stored in the Django session store is used to access back-end services.

A user can be deleted via both the CLI and Horizon.
(a) If a user is deleted via horizon, horizon can clear the session, and the deleted user cannot access back-end services immediately after the deletion.
(b) If a user is deleted via the CLI (or in other ways horizon is not involved in), there is no way for horizon to know about it, so the deleted user can continue to access services.

Horizon has a setting SESSION_TIMEOUT (which defaults to 3600) [1]. Once the minimum of the keystonemiddleware cache time and the Horizon session timeout passes, the deleted user can no longer access services. As of now, the keystonemiddleware cache time is 300 sec and Horizon's SESSION_TIMEOUT is 3600 sec by default, so the situation reported here continues for 300 sec after the user is deleted.

I think it is reasonable to keep the current behavior from the performance perspective.
As Morgan commented, we can improve our documentation to highlight this behavior clearly.

[1] https://docs.openstack.org/horizon/latest/configuration/settings.html#session-timeout
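The timing argument above can be sketched as a back-of-the-envelope model. This is illustrative only, not Horizon or keystonemiddleware code; the helper name and the modelling are assumptions based on the two timers discussed in this thread:

```python
# Illustrative model (not actual Horizon/keystonemiddleware code) of the
# worst-case window during which a deleted user can still act.

TOKEN_CACHE_TIME = 300   # keystonemiddleware token_cache_time default, seconds
SESSION_TIMEOUT = 3600   # Horizon SESSION_TIMEOUT default, seconds

def exposure_window(token_cache_time: int, session_timeout: int) -> int:
    """Worst-case seconds after deletion during which access may still succeed.

    A service keeps serving the user while its cached token validation is
    fresh, and Horizon keeps the session while it has not timed out, so the
    window is bounded by whichever expires first.
    """
    return min(token_cache_time, session_timeout)

print(exposure_window(TOKEN_CACHE_TIME, SESSION_TIMEOUT))  # 300 with the defaults above
```

With the default values this yields the 300-second window Akihiro describes; shrinking token_cache_time trades that window against extra validation traffic to keystone.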

Revision history for this message
Jeremy Stanley (fungi) wrote :

Arthur: Do Morgan and Akihiro's comments explain the behavior you observed? If so, I think we can switch this bug to public and work on better documenting the token cache control options, to make it clear to deployers and operators how to strike a balance between increased security and increased performance.

Revision history for this message
Arthur Nikolayev (arthur-nik) wrote :

Hello.
Morgan and Akihiro, thank you for your answers.

In this deployment I use Memcached to cache Keystone's Fernet tokens (the default configuration). I also configured the default memcached option in Horizon: django.core.cache.backends.memcached.MemcachedCache (as suggested in the Stein installation guide).

It would be great if you could add some information about caching to the docs.

Thank you for pointing out the SESSION_TIMEOUT option. I was looking through Horizon options to mitigate this problem and thought about using it.

So is the default keystonemiddleware cache expiration time in such a deployment equal to 300 sec? Though I can look up a token's expiration time by issuing the "openstack token issue" command.

If this is expected behavior, then I don't have any further questions :)
Thank you.

Revision history for this message
Arthur Nikolayev (arthur-nik) wrote :

Hi Jeremy.
Yes, their comments helped me understand this situation.

Revision history for this message
Jeremy Stanley (fungi) wrote :

Thanks, I'm marking our security advisory task "won't fix" and lifting the private embargo, treating this as a class D report indicating a need for documentation improvements: https://security.openstack.org/vmt-process.html#incident-report-taxonomy

Changed in ossa:
status: Incomplete → Won't Fix
tags: added: security
description: updated
information type: Private Security → Public
Revision history for this message
Akihiro Motoki (amotoki) wrote :

> Thank you for pointing out the SESSION_TIMEOUT option. I was looking through Horizon options to mitigate this problem and thought about using it.

For more detail, there are two options involved in horizon.
- SESSION_TIMEOUT
- SESSION_REFRESH
If SESSION_REFRESH is set to True (the current default), a shorter SESSION_TIMEOUT would not matter in most cases.
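For reference, both options live in Horizon's Django settings. A minimal local_settings.py fragment with the default values quoted in this thread (the file location varies by packaging; this is a config sketch, not a recommended tuning):

```python
# Horizon local_settings.py fragment (Django settings module; default values)
SESSION_TIMEOUT = 3600   # seconds of inactivity before the Horizon session expires
SESSION_REFRESH = True   # renew the timeout on each request, so an active user
                         # stays logged in past SESSION_TIMEOUT
```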

> So is the default keystonemiddleware cache expiration time in such a deployment equal to 300 sec? Though I can look up a token's expiration time by issuing the "openstack token issue" command.

The default value is defined here [1].
You can test the timeout of the keystonemiddleware cache using curl.
[2] is an example from my devstack environment. L.39-49 retrieves a token before the user is deleted and tests that it works. L.81 confirms the token is still valid just after the user is deleted. I confirmed the curl command failed with an auth error a couple of minutes later (though the paste does not cover it). You can try something similar.

[1] https://opendev.org/openstack/keystonemiddleware/src/branch/master/keystonemiddleware/auth_token/_opts.py#L107-L112
[2] http://paste.openstack.org/show/777693/

Revision history for this message
Akihiro Motoki (amotoki) wrote :

I am adding keystone to the affected projects, as I failed to find a good pointer to this behavior in the keystone documentation. If there is already a good pointer in keystone, that would be appreciated; I will refer to it in the horizon doc.

Changed in horizon:
assignee: nobody → Akihiro Motoki (amotoki)
Revision history for this message
Arthur Nikolayev (arthur-nik) wrote :

Akihiro, thank you for detailed explanation.

Revision history for this message
Matthias Runge (mrunge) wrote :

The same behaviour should be reproducible via the command line client.

This issue is inherent when you use tokens that are valid for some time and services only validate the token itself, but do not check (via keystone) whether the token has been invalidated.

Revision history for this message
Morgan Fainberg (mdrnstm) wrote :

Added Keystonemiddleware and documentation tags. Marked as "medium" importance as it requires documentation changes but is not critical/RC/otherwise impacting. Clear communication of expected behavior is important and should be found in Horizon and Keystonemiddleware's documentation.

I am marking this invalid for keystone itself, as keystone will invalidate its internal cache (barring cases such as the in-memory [not production quality] dict-based cache).

tags: added: documentation
Changed in keystone:
status: New → Confirmed
Changed in keystonemiddleware:
status: New → Triaged
Changed in keystone:
status: Confirmed → Triaged
importance: Undecided → Medium
Changed in keystonemiddleware:
importance: Undecided → Medium
Changed in keystone:
status: Triaged → Invalid
Revision history for this message
Akihiro Motoki (amotoki) wrote :

Changing the horizon priority to Medium (as the priority in keystone is Medium and there is no reason to use a higher priority in horizon).

Changed in horizon:
importance: High → Medium
Revision history for this message
Akihiro Motoki (amotoki) wrote :

I am clearing my assignee in horizon as I don't see any progress on the keystone side.

Changed in horizon:
assignee: Akihiro Motoki (amotoki) → nobody