ec2 describe instances does not filter by project_id

Bug #1074343 reported by Jose Castro Leon
This bug affects 1 person
Affects: OpenStack Compute (nova)
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

If you have retrieved your EC2 credentials via Keystone or Horizon and then try to list your instances, you receive a list of all the instances on the controller.

Reproducible: always
1. Download your ec2 credentials
2. euca-describe-instances

Actual Result:
List of all available instances

Expected Result:
List of only the instances in your specific tenant.

Revision history for this message
Jose Castro Leon (jose-castro-leon) wrote :

If the filter includes project_id in the query, it will filter the output and retrieve only the instances on the specific tenant.

--- nova/api/ec2/cloud.py.orig 2012-10-31 15:58:27.905773055 +0100
+++ nova/api/ec2/cloud.py 2012-10-31 16:37:58.515588746 +0100
@@ -1178,6 +1178,7 @@
             try:
                 # always filter out deleted instances
                 search_opts['deleted'] = False
+                search_opts['project_id'] = context.project_id
                 instances = self.compute_api.get_all(context,
                                                      search_opts=search_opts,
                                                      sort_dir='asc')

Revision history for this message
Thierry Carrez (ttx) wrote :

Adding security flag to make sure we clear a potential security impact here.
What version of Nova is this reproduced on? Is the user special, like a tenant admin?

information type: Public → Public Security
Changed in nova:
status: New → Incomplete
Revision history for this message
Jose Castro Leon (jose-castro-leon) wrote :

The user is a normal user, and the credentials are scoped to a tenant in keystone.
We are running on Essex, but this code in particular is the same in Folsom and Grizzly.
I have checked it in advance to see if it was applicable.

Revision history for this message
Michael Still (mikal) wrote :

This is interesting, as this works correctly in my test environment. I wonder what's different?

Revision history for this message
Thierry Carrez (ttx) wrote :

Adding Vish to discuss what's happening here.

Revision history for this message
Thierry Carrez (ttx) wrote :

Hrm... instance_get_all_by_filters (which gets called ultimately) definitely adds filters['project_id'] = context.project_id, so this should be filtered ok. Asking for more help in channel
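
For reference, a runnable toy sketch (simplified and assumed, not the actual Essex source; the instance data and names are illustrative only) of the guard being quoted from nova/db/sqlalchemy/api.py:

from collections import namedtuple

RequestContext = namedtuple('RequestContext', ['project_id', 'is_admin'])

INSTANCES = [
    {'id': 'i-00000001', 'project_id': 'tenant-a', 'deleted': False},
    {'id': 'i-00000002', 'project_id': 'tenant-b', 'deleted': False},
]

def instance_get_all_by_filters(context, filters):
    filters = dict(filters)
    if not context.is_admin:
        # the guard in question: non-admin callers are always scoped to
        # their own tenant, regardless of the filters they passed in
        filters['project_id'] = context.project_id
    return [inst for inst in INSTANCES
            if all(inst.get(k) == v for k, v in filters.items())]

ctx = RequestContext(project_id='tenant-a', is_admin=False)
print(instance_get_all_by_filters(ctx, {'deleted': False}))
# prints only the tenant-a instance, even though the caller never asked
# for a project_id filter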

Revision history for this message
Thierry Carrez (ttx) wrote :

@Nova-core: please confirm my analysis

Revision history for this message
Sean Dague (sdague) wrote :

Can't confirm on Grizzly. Here is the path I'm trying to take to confirm:

install devstack (with 'enable tempest' to get 2 users)

run tempest tests (turns out there ends up being an error VM under id 'demo')

source devstack/eucarc demo demo

euca-describe-instances shows:

RESERVATION r-mvs31w39 6ba446cacc914437b24accd1401d0024 boot_secgroup
INSTANCE i-00000001 ami-00000001 test-vol-instance test-vol-instance error test_key 0 m1.tiny 2012-11-16T16:35:18.000Z nova aki-00000002 ari-00000003 monitoring-disabled instance-store

source devstack/eucarc alt_demo alt_demo

euca-describe-instances shows:

(nothing, no output)

If there is a reason this isn't a sufficient check, I'd like to know what the check should be to easily confirm.

Revision history for this message
Thierry Carrez (ttx) wrote :

@Jose: could you answer Sean's question?

Revision history for this message
Jose Castro Leon (jose-castro-leon) wrote :

That's the check that I was doing. It could be an Essex bug that you have already fixed in Folsom.

Revision history for this message
Thierry Carrez (ttx) wrote :

@Jose: ok, I guess we need to reproduce on Essex then. What version are you running exactly? The pure 2012.1 release? Some point release after that? Some package from a distro? The current state of the stable/essex branch?

Revision history for this message
Jose Castro Leon (jose-castro-leon) wrote :

It's a Scientific Linux CERN distro (RHEL derivative); we took the packages from EPEL.
It's running 'openstack-nova-2012.1.1-15.el6'.

Revision history for this message
Thierry Carrez (ttx) wrote :

Adding the stable/essex maintenance crew so that they can help reproduce this. From just looking at the code, Essex should behave correctly, so we need to reproduce.

Changed in nova:
status: Incomplete → New
Changed in nova:
status: New → Invalid
Revision history for this message
Thierry Carrez (ttx) wrote :

Trying to debunk this one again. Cannot be reproduced on Grizzly, and Essex uses the same code. My two explanations are: inadvertent use of an admin user, or weird code being used.

@Jose: can you confirm that your patch (ec2.patch) allowed you to work around the issue? If yes, could you check the following trail and confirm that the code you are running contains it:

nova/api/ec2/cloud.py: your patch adds search_opts['project_id'] = context.project_id before calling self.compute_api.get_all

nova/compute/api.py: get_all just copies search_opts to filters, calls _get_instances_by_filters, which in turn calls instance_get_all_by_filters

nova/db/sqlalchemy/api.py: if not context.is_admin: [...] filters['project_id'] = context.project_id

So for context.is_admin=False, the patch looks like a noop to me... at least with current stable/essex code.
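
To make the no-op argument concrete, here is a minimal self-contained sketch (illustrative only, not nova code) showing that for a non-admin context the resulting filters are identical with or without the extra line from ec2.patch:

from collections import namedtuple

RequestContext = namedtuple('RequestContext', ['project_id', 'is_admin'])

def build_filters(context, search_opts, patched):
    filters = dict(search_opts)      # compute.api.get_all copies search_opts
    if patched:
        # the one line added by ec2.patch in nova/api/ec2/cloud.py
        filters['project_id'] = context.project_id
    if not context.is_admin:
        # the existing guard in nova/db/sqlalchemy/api.py
        filters['project_id'] = context.project_id
    return filters

ctx = RequestContext(project_id='tenant-a', is_admin=False)
assert build_filters(ctx, {'deleted': False}, patched=False) == \
       build_filters(ctx, {'deleted': False}, patched=True)
# identical filters either way: for non-admin users the patch changes nothing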

Sean Dague (sdague)
no longer affects: nova/essex