Authorization by user_id does not work in V2.1 API

Bug #1539351 reported by Takashi Natsume
This bug affects 10 people
Affects: OpenStack Compute (nova)
Status: Won't Fix
Importance: Undecided
Assigned to: Unassigned

Bug Description

When authorization for deleting a VM instance is based on user_id,
it works in the v2.0 API but does not work in the v2.1 API.

[How to reproduce]
In Nova's policy.json, add the following entries (or modify existing entries accordingly):

-----------------------------------------------
"user": "user_id:%(user_id)s",
"compute:delete": "rule:user",
"os_compute_api:servers:delete": "rule:user",
-----------------------------------------------
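A rule such as "user_id:%(user_id)s" compares an attribute of the caller's credentials with an attribute of the target object (here, the instance). The following is a simplified Python illustration of that comparison, not Nova's actual enforcement code; the IDs are taken from the command output later in this report.

```python
# Simplified model (not Nova's real code) of evaluating the rule
# "user": "user_id:%(user_id)s" against an instance.

def check_user_rule(credentials, target):
    """Return True when the caller's user_id matches the instance owner's."""
    return credentials.get("user_id") == target.get("user_id")

# The requesting user owns the instance: the rule passes.
owner = {"user_id": "357fc80d750646f7b3b56fc1e6792222"}
instance = {"user_id": "357fc80d750646f7b3b56fc1e6792222",
            "project_id": "533daaf421554a84aa3b023b4a9c341c"}
print(check_user_rule(owner, instance))   # True

# A different user in the same project: the rule fails (HTTP 403 in v2.0).
other = {"user_id": "9ab80146bc964e81bfcf3331f6b8bb2d"}
print(check_user_rule(other, instance))   # False
```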

In Nova's api-paste.ini, change 'openstack_compute_api_v21_legacy_v2_compatible' to
'openstack_compute_api_legacy_v2' for the "/v2" endpoint:

-----------------------------------------------
[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v2: openstack_compute_api_legacy_v2
/v2.1: openstack_compute_api_v21
-----------------------------------------------

In the v2.0 API, authorization by 'user_id' works as expected:
only the user who created a VM instance can delete it.

In the v2.1 API, authorization by 'user_id' does not work:
any user in the same project can delete a VM instance that another user created.

stack@devstack-master:/opt/devstack$ openstack user list
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 1cd4d65d4f534cd89299bbf31edb37a4 | admin |
| 218e7be255be4c90bf0c4d796a9d509c | nova |
| 357fc80d750646f7b3b56fc1e6792222 | demo |
| 37c5204df2d345fb8a76359966dc8d1b | heat |
| 4a6e928a20a743a6a3d80944c607a22a | neutron |
| 8c613c4691e2447e8082f6c425cd34af | glance |
| 9ab80146bc964e81bfcf3331f6b8bb2d | alt_demo |
| ecd940201f5c45a8833bb739149a54f0 | cinder |
+----------------------------------+----------+
stack@devstack-master:/opt/devstack$ openstack project list
+----------------------------------+--------------------+
| ID | Name |
+----------------------------------+--------------------+
| 4b7c129ea5ee49d1a620c26272091ec7 | admin |
| 4c3e76d51a3c4df384c74b8cafb3a9cc | invisible_to_admin |
| 533daaf421554a84aa3b023b4a9c341c | demo |
| b04c7788628849a48b831f5ad57e374a | service |
+----------------------------------+--------------------+
stack@devstack-master:/opt/devstack$ openstack catalog show compute
+-----------+----------------------------------------------------------------------------+
| Field | Value |
+-----------+----------------------------------------------------------------------------+
| endpoints | RegionOne |
| | publicURL: http://10.0.2.15:8774/v2.1/533daaf421554a84aa3b023b4a9c341c |
| | internalURL: http://10.0.2.15:8774/v2.1/533daaf421554a84aa3b023b4a9c341c |
| | adminURL: http://10.0.2.15:8774/v2.1/533daaf421554a84aa3b023b4a9c341c |
| | |
| name | nova |
| type | compute |
+-----------+----------------------------------------------------------------------------+
stack@devstack-master:/opt/devstack$ openstack catalog show compute_legacy
+-----------+--------------------------------------------------------------------------+
| Field | Value |
+-----------+--------------------------------------------------------------------------+
| endpoints | RegionOne |
| | publicURL: http://10.0.2.15:8774/v2/533daaf421554a84aa3b023b4a9c341c |
| | internalURL: http://10.0.2.15:8774/v2/533daaf421554a84aa3b023b4a9c341c |
| | adminURL: http://10.0.2.15:8774/v2/533daaf421554a84aa3b023b4a9c341c |
| | |
| name | nova_legacy |
| type | compute_legacy |
+-----------+--------------------------------------------------------------------------+
stack@devstack-master:/opt/devstack$ nova show server1
+--------------------------------------+----------------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | devstack-master |
| OS-EXT-SRV-ATTR:hostname | server1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | devstack-master |
| OS-EXT-SRV-ATTR:instance_name | instance-00000004 |
| OS-EXT-SRV-ATTR:kernel_id | b0d768cd-3483-4e25-8b9d-9d8863f16502 |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | cacd6bf4-fd74-49b5-9b62-7094d576ea6a |
| OS-EXT-SRV-ATTR:reservation_id | r-workgpr8 |
| OS-EXT-SRV-ATTR:root_device_name | /dev/vda |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2016-01-28T06:02:59.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | True |
| created | 2016-01-28T06:02:47Z |
| flavor | m1.tiny (1) |
| hostId | 5084983d07d356ef1de4eacd7a1ad854a39e6f39582715bc9aed7097 |
| id | cb921ee5-07b6-4f2e-b66a-efcc05a74368 |
| image | cirros-0.3.4-x86_64-uec (b44a1bbe-3968-4664-898b-40eb81ce6bd5) |
| key_name | - |
| locked | False |
| metadata | {} |
| name | server1 |
| os-extended-volumes:volumes_attached | [] |
| private network | 10.0.10.6, fd7a:6b74:f7b9:0:f816:3eff:fe14:d99 |
| progress | 0 |
| security_groups | default |
| status | ACTIVE |
| tenant_id | 533daaf421554a84aa3b023b4a9c341c |
| updated | 2016-01-28T06:02:59Z |
| user_id | 357fc80d750646f7b3b56fc1e6792222 |
+--------------------------------------+----------------------------------------------------------------+
stack@devstack-master:/opt/devstack$ nova --service-type compute_legacy --os-user-name alt_demo --os-project-name demo delete server1
Policy doesn't allow compute:delete to be performed. (HTTP 403) (Request-ID: req-cb34aecd-260a-4d50-b481-cd9483ae8745)
ERROR (CommandError): Unable to delete the specified server(s).
stack@devstack-master:/opt/devstack$ nova --service-type compute_legacy --os-user-name demo --os-project-name demo delete server1
Request to delete server server1 has been accepted.

stack@devstack-master:/opt/devstack$ nova show server2
+--------------------------------------+----------------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | devstack-master |
| OS-EXT-SRV-ATTR:hostname | server2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | devstack-master |
| OS-EXT-SRV-ATTR:instance_name | instance-00000006 |
| OS-EXT-SRV-ATTR:kernel_id | b0d768cd-3483-4e25-8b9d-9d8863f16502 |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | cacd6bf4-fd74-49b5-9b62-7094d576ea6a |
| OS-EXT-SRV-ATTR:reservation_id | r-xo3y1bo9 |
| OS-EXT-SRV-ATTR:root_device_name | /dev/vda |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2016-01-28T06:06:29.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | True |
| created | 2016-01-28T06:06:18Z |
| flavor | m1.tiny (1) |
| hostId | 5084983d07d356ef1de4eacd7a1ad854a39e6f39582715bc9aed7097 |
| id | c5efae23-b7d6-492c-8a57-578825f8d563 |
| image | cirros-0.3.4-x86_64-uec (b44a1bbe-3968-4664-898b-40eb81ce6bd5) |
| key_name | - |
| locked | False |
| metadata | {} |
| name | server2 |
| os-extended-volumes:volumes_attached | [] |
| private network | 10.0.10.8, fd7a:6b74:f7b9:0:f816:3eff:fe81:2b07 |
| progress | 0 |
| security_groups | default |
| status | ACTIVE |
| tenant_id | 533daaf421554a84aa3b023b4a9c341c |
| updated | 2016-01-28T06:06:29Z |
| user_id | 357fc80d750646f7b3b56fc1e6792222 |
+--------------------------------------+----------------------------------------------------------------+
stack@devstack-master:/opt/devstack$ nova --service-type compute --os-user-name alt_demo --os-project-name demo delete server2
Request to delete server server2 has been accepted.

[Environment]
Ubuntu 14.04 LTS
nova(master, commit 1dfec7186222054c7bc810c9c6894aeac3173321)
novaclient 3.2.0

Tags: api
Revision history for this message
jichenjc (jichenjc) wrote :

I have a similar issue and confirmed this can be reproduced.

Changed in nova:
status: New → Confirmed
assignee: nobody → jichenjc (jichenjc)
Revision history for this message
jichenjc (jichenjc) wrote :

When we use v2, the following log message appears and prevents further action, but in v2.1 I don't see it:

2016-02-05 06:16:44.275 13961 DEBUG nova.policy [req-fb5fea1d-d7d9-497c-8d08-bbd21215ea3f fd45d7605c4a4e90b7ce236773c0ed75 d1c5aa58af6c426492c642eb649017be - - -] Policy check for compute:delete failed with
credentials {'domain': None, 'project_name': u'demo', 'project_domain': None, 'timestamp': '2016-02-05T11:16:44.196011', 'remote_address': '192.168.122.239', 'quota_class': None, 'resource_uuid': None, 'is_admin': False, 'user': u'fd45d7605c4a4e90b7ce236773c0ed75', 'service_catalog': [{u'endpoints': [{u'adminURL': u'http://192.168.122.239:8776/v2/d1c5aa58af6c426492c642eb649017be', u'region': u'RegionOne', u'internalURL': u'http://192.168.122.239:8776/v2/d1c5aa58af6c426492c642eb649017be', u'publicURL': u'http://192.168.122.239:8776/v2/d1c5aa58af6c426492c642eb649017be'}], u'type': u'volumev2', u'name': u'cinderv2'}, {u'endpoints': [{u'adminURL': u'http://192.168.122.239:8776/v1/d1c5aa58af6c426492c642eb649017be', u'region': u'RegionOne', u'internalURL': u'http://192.168.122.239:8776/v1/d1c5aa58af6c426492c642eb649017be', u'publicURL': u'http://192.168.122.239:8776/v1/d1c5aa58af6c426492c642eb649017be'}], u'type': u'volume', u'name': u'cinder'}], 'tenant': u'd1c5aa58af6c426492c642eb649017be', 'read_only': False, 'project_id': u'd1c5aa58af6c426492c642eb649017be', 'user_id': u'fd45d7605c4a4e90b7ce236773c0ed75', 'show_deleted': False, 'roles': [u'anotherrole'], 'user_identity': 'fd45d7605c4a4e90b7ce236773c0ed75 d1c5aa58af6c426492c642eb649017be - - -', 'read_deleted': 'no', 'request_id': 'req-fb5fea1d-d7d9-497c-8d08-bbd21215ea3f', 'instance_lock_checked': False, 'user_domain': None, 'user_name': u'alt_demo'} enforce /opt/stack/nova/nova/policy.py:104
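The credentials in that log carry the caller's own user_id. For "user_id:%(user_id)s" to restrict anything, the enforcement target must carry the *instance's* user_id. A hedged sketch of one hypothesis consistent with the log (my assumption, not confirmed anywhere in this thread): if the v2.1 path builds the target from the request context rather than from the instance, the rule compares the caller's user_id with itself and passes vacuously.

```python
# Hypothetical illustration: how "user_id:%(user_id)s" becomes vacuous
# when the target is built from the caller's context, not the instance.

def enforce(credentials, target):
    # Only the user_id comparison is modelled here.
    return credentials["user_id"] == target["user_id"]

creds = {"user_id": "fd45d7605c4a4e90b7ce236773c0ed75",
         "project_id": "d1c5aa58af6c426492c642eb649017be"}

# v2.0-style target: attributes of the instance being deleted.
instance_target = {"user_id": "357fc80d750646f7b3b56fc1e6792222"}
print(enforce(creds, instance_target))   # False -> HTTP 403

# Degenerate target copied from the caller's own context: always passes.
context_target = {"user_id": creds["user_id"]}
print(enforce(creds, context_target))    # True -> delete accepted
```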

Revision history for this message
jichenjc (jichenjc) wrote :

It's weird... I changed the rule to
"os_compute_api:servers:delete": "is_admin:True",

Then, with the demo user, I cannot delete the instance, so the policy enforcement code seems correct on that side.

Revision history for this message
jichenjc (jichenjc) wrote :

Confirmed this is not a novaclient issue:

jichen@devstack1:~$ export OS_USERNAME=alt_demo
jichen@devstack1:~$ keystone token-get
+-----------+----------------------------------+
| Property | Value |
+-----------+----------------------------------+
| expires | 2016-02-05T13:36:26Z |
| id | a1f32eff6ba0404186273510a848cf1b |
| tenant_id | d1c5aa58af6c426492c642eb649017be |
| user_id | fd45d7605c4a4e90b7ce236773c0ed75 |
+-----------+----------------------------------+

jichen@devstack1:~$ curl -g -i -X DELETE http://192.168.122.239:8774/v2/d1c5aa58af6c426492c642eb649017be/servers/ac3b44ec-d576-4edf-8203-50eadd2ca385 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: a1f32eff6ba0404186273510a848cf1b"
HTTP/1.1 403 Forbidden
Content-Length: 95
Content-Type: application/json; charset=UTF-8
X-Compute-Request-Id: req-44541e26-f3fb-480f-9f32-02cafd43ccf5
Date: Fri, 05 Feb 2016 12:37:13 GMT

{"forbidden": {"message": "Policy doesn't allow compute:delete to be performed.", "code": 403}}jichen@devstack1:~$
jichen@devstack1:~$ curl -g -i -X DELETE http://192.168.122.239:8774/v2.1/d1c5aa58af6c426492c642eb649017be/servers/ac3b44ec-d576-4edf-8203-50eadd2ca385 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: a1f32eff6ba0404186273510a848cf1b"
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: application/json
X-Openstack-Nova-Api-Version: 2.6
Vary: X-OpenStack-Nova-API-Version
X-Compute-Request-Id: req-32583396-07f3-4492-95ef-842f55bc4d5d
Date: Fri, 05 Feb 2016 12:37:22 GMT

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to nova (master)

Fix proposed to branch: master
Review: https://review.openstack.org/276738

Changed in nova:
status: Confirmed → In Progress
Revision history for this message
jichenjc (jichenjc) wrote :

I tried some commands with the following changes in policy.json:

"userallow": "user_id:%(user_id)s",

"compute:get": "rule:userallow",
"compute:pause": "rule:userallow",
"compute:unpause": "rule:userallow",

"os_compute_api:servers:show": "rule:userallow",
"compute_extension:admin_actions:pause": "rule:admin_or_owner",
"compute_extension:admin_actions:unpause": "rule:admin_or_owner",

pause/unpause can only be executed by the owner or admin by default in v2.1.

jichen@devstack1:~$ nova --service-type compute_legacy --os-user-name alt_demo --os-project-name demo show ji1
ERROR (Forbidden): Policy doesn't allow compute:get to be performed. (HTTP 403) (Request-ID: req-1c93cd00-1df8-4722-b10f-9fed29536fb6)

jichen@devstack1:~$ nova --service-type compute --os-user-name alt_demo --os-project-name demo show ji1
+--------------------------------------+----------------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 3 |
| OS-EXT-STS:task_state | - |

jichen@devstack1:~$ nova --service-type compute --os-user-name alt_demo --os-project-name demo pause ji1
jichen@devstack1:~$

jichen@devstack1:~$ nova --service-type compute_legacy --os-user-name alt_demo --os-project-name demo pause ji1
ERROR (Forbidden): Policy doesn't allow compute:get to be performed. (HTTP 403) (Request-ID: req-63e39575-af0d-4fac-8e43-4ec7e40fc117)
jichen@devstack1:~$

Revision history for this message
jichenjc (jichenjc) wrote :

So a few server-related actions should be checked and fixed; I will dig further into other objects.

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/277433

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/277500

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/277572

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Fix proposed to branch: master
Review: https://review.openstack.org/277797

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on nova (master)

Change abandoned by jichenjc (<email address hidden>) on branch: master
Review: https://review.openstack.org/276738

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by jichenjc (<email address hidden>) on branch: master
Review: https://review.openstack.org/277433

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by jichenjc (<email address hidden>) on branch: master
Review: https://review.openstack.org/277500

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by jichenjc (<email address hidden>) on branch: master
Review: https://review.openstack.org/277572

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

Change abandoned by jichenjc (<email address hidden>) on branch: master
Review: https://review.openstack.org/277797

Revision history for this message
Sean Dague (sdague) wrote :

I think we have now decided the old behavior was never intended, and will not be supported in Nova moving forward. Permission restriction should be by project_id.
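For reference, restriction by project_id uses the same rule syntax. A minimal policy.json sketch (the rule name and the is_admin clause follow the common admin_or_owner convention; this is an illustration, not an excerpt from any deployment in this report):

-----------------------------------------------
"admin_or_owner": "is_admin:True or project_id:%(project_id)s",
"os_compute_api:servers:delete": "rule:admin_or_owner",
-----------------------------------------------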

Changed in nova:
status: In Progress → Won't Fix
Revision history for this message
Jerome Pansanel (pansanel) wrote :

Dears,

This type of permission restriction is really useful. Is there a way to continue to support it?

Cheers,

Jerome Pansanel

Revision history for this message
Alvaro Lopez (aloga) wrote :

Hi Sean,

I do see several use cases where this is useful, what has motivated this decision?

Revision history for this message
Vincent Legoll (vincent-legoll) wrote :

Hello,

I also think having 2 separate levels where permission restrictions can be enforced is not only helpful, but the only way to stay sane when you have a lot of users / groups.

You'd have to create one tenant per user if you want to emulate the old behaviour, completely negating the usefulness of the tenant concept.

My vote is that this new 'feature' is a regression. The fact that it was not intended does not change the fact that you are now removing something that was useful and actually used...

Revision history for this message
Vincent Legoll (vincent-legoll) wrote :

This also silently introduces security problems, without even a warning about the new behaviour.

Please at least emit a prominent warning when parsing a policy.json rule that will be ignored.

Revision history for this message
Vincent Legoll (vincent-legoll) wrote :

@sdague: re your #17 comment, could you give us a pointer to where this subject was discussed? Thanks.

Revision history for this message
jichenjc (jichenjc) wrote :

Yeah, I talked to John on IRC again after that (I may remember wrong, since it was a few months ago), but at the time he mentioned this is something that is supposed to be done by Keystone. There is an idea about policy discovery, which means letting Keystone decide whether a user is eligible to perform an action, so it is very unlikely that Nova will take on this work.
http://lists.openstack.org/pipermail/openstack-dev/2016-February/086118.html is more helpful on the conclusion of this issue.

And FYI, there are some policy-related specs:
https://review.openstack.org/#/c/289405/
https://review.openstack.org/#/c/290155/
These are the specs related to policy in the Newton cycle; they were well discussed at the summit (I didn't attend, but I assume so).

Changed in nova:
assignee: jichenjc (jichenjc) → nobody
Revision history for this message
jichenjc (jichenjc) wrote :

I am removing myself as assignee since this is a Won't Fix issue.

Revision history for this message
Alvaro Lopez (aloga) wrote :

To be honest, I think this should be fixed. OpenStack has completely removed the possibility for an operator to authorize certain operations based on the user_id of the target instance.

First of all, this changes the behavior relative to the previous version of the API, and the behavioral change was completely undocumented. Sites relying on it now find their policies completely broken.

Secondly, IMO operators should have a way to define how they want their clouds to be accessed and managed, and OpenStack should not impose a concrete usage model on the assumption that it will fit everybody. Why have we assumed that in every single case the resources are owned by the tenant? IMO this is a simplistic approach, and I do see several use cases where this particular AuthZ granularity is needed and required.

The policy discoverability spec (https://review.openstack.org/#/c/289405/) and the embedded defaults spec (https://review.openstack.org/#/c/290155/) are useful, but if the user-based permission restriction has been removed, they are useless in the context of this bug.

Revision history for this message
Alvaro Lopez (aloga) wrote :

Not to mention that the documentation is not consistent with the functionality; for instance, http://docs.openstack.org/mitaka/config-reference/policy-json-file.html#examples states the following:

    Rules can compare API attributes to object attributes. For example:
        "compute:start" : "user_id:%(user_id)s"

Revision history for this message
Tim Bell (tim-bell) wrote :

We use the user-id policy feature in several large projects at CERN. The typical case is a set of developers (more than 150) who are working in a single team and need the ability to reboot/restart their own machines. The project admins provide second-level support. I believe that the changes described above will break this functionality.

I would propose that this functionality be retained in Nova until the equivalent function is available in Keystone. Equally, changes such as this in the security area need to be handled very sensitively with the user communities who depend on these functions.
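Tim's use case (owners may reboot their own machines, admins may do everything) maps onto the same rule syntax. A sketch of policy.json entries under the legacy v2 API (the rule name "user_or_admin" is illustrative, not from any deployment in this report):

-----------------------------------------------
"user_or_admin": "user_id:%(user_id)s or is_admin:True",
"compute:reboot": "rule:user_or_admin",
-----------------------------------------------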

Revision history for this message
Vincent Legoll (vincent-legoll) wrote :

@sdague: or any other member of upstream OpenStack, care to enlighten us on the solution to this problem? Or a pointer to the official decision on the matter.

Revision history for this message
Charles Short (cems) wrote :

At EMBL-EBI we intend to federate our OpenStack Liberty cloud with EGI, and we need to implement granular user-based permissions within projects to achieve this. We also have internal demand from teams for this functionality.

Revision history for this message
John Garbutt (johngarbutt) wrote :

So the problem here is that our API semantics assume everything is filtered by project: listing instances, and so on.

A user could lock their server to stop other users in the project from doing things like snapshotting or deleting it.

For billing, you can group projects into nested projects and add extra data at a different level.

I think it was previously assumed you would have a project of shared VMs and a dedicated project for each user's own VMs, with billing aggregation to group those as a single billing entity.

Is there a use case we are missing here, or something that makes the above impractical?

Revision history for this message
Ken'ichi Ohmichi (oomichi) wrote :

The comment #28 of Tim seems a good use case for me.

If user-id authorization is supported, this use case can be handled with a single policy.json configuration, without any extra user operations.

In general (I feel), developers tend to give virtual machines temporary, common names like "dev", "test", or "foo", so it is easy for names to collide between users. It would then be easy to mistakenly delete another user's machine in the situation Tim described (150 developers in a single tenant), and the other developer gets angry. So I am not against having the user-id policy for this use case.

Revision history for this message
John Garbutt (johngarbutt) wrote :

For more details please see this spec:
https://review.openstack.org/#/c/324068/

This report contains Public information
Everyone can see this information.