Regular user in non-default non-recommended configuration can delete any image file

Bug #1546507 reported by Mike Fedosin
This bug affects 1 person
Affects Status Importance Assigned to Milestone
Glance
Confirmed
Critical
Mike Fedosin
Liberty
Confirmed
Critical
Unassigned
Mitaka
Confirmed
Critical
Unassigned
Newton
Confirmed
Critical
Unassigned
Ocata
Confirmed
Critical
Mike Fedosin
OpenStack Security Advisory
Opinion
Undecided
Unassigned
OpenStack Security Notes
New
Undecided
Unassigned

Bug Description

Any user can delete any public image's data, or gain access to a private image, just by knowing the image id.

Glance allows adding a custom location to an image, and this behavior is really harmful.

Scenario of deleting image data in the Ceph backend with the current devstack configuration:

1. User gets the list of images:
mfedosin@winter ~ $ glance image-list
+--------------------------------------+----------------------------+
| ID | Name |
+--------------------------------------+----------------------------+
| 0741cbc7-6b9f-4eb4-a666-9743a186849e | debian-8-m-agent.qcow2 |
| 2e4b6dca-9700-4715-b81d-4463cd7038de | TestVM |
| 39599dd3-35cb-4893-b5d4-1a17e23e538a | ubuntu14.04-x64-docker |
| 153397f8-d5e5-43d1-9a08-5fc52bda11a4 | ubuntu14.04-x64-kubernetes |
+--------------------------------------+----------------------------+

2. User requests info about the public image he wants to delete:
mfedosin@winter ~ $ glance image-show 2e4b6dca-9700-4715-b81d-4463cd7038de
+------------------+----------------------------------------------------------------------------------+
| Property | Value |
+------------------+----------------------------------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2016-02-11T03:38:09Z |
| direct_url | rbd://647f7ae8-648a-44f5-83ad-f7bd2299274e/images/2e4b6dca-9700-4715-b81d- |
| | 4463cd7038de/snap |
| disk_format | qcow2 |
| id | 2e4b6dca-9700-4715-b81d-4463cd7038de |
| min_disk | 0 |
| min_ram | 64 |
| name | TestVM |
| owner | 1c6cea59a6054372b10acbab8e25e415 |
| protected | False |
| size | 13287936 |
| status | active |
| tags | [] |
| updated_at | 2016-02-11T03:38:30Z |
| virtual_size | None |
| visibility | public |
+------------------+----------------------------------------------------------------------------------+

Optional: User may try to download the image file with "glance image-download 2e4b6dca-9700-4715-b81d-4463cd7038de --file gg"

3. User copies the direct image url from the 'direct_url' or 'locations' field:
rbd://647f7ae8-648a-44f5-83ad-f7bd2299274e/images/2e4b6dca-9700-4715-b81d-4463cd7038de/snap

4. User creates a new image record in the db and sets a custom location with "glance --os-image-api-version 1 image-create --location" (v1) or "glance location-add --url" (v2):
mfedosin@winter ~ $ glance --os-image-api-version 1 image-create --location "rbd://647f7ae8-648a-44f5-83ad-f7bd2299274e/images/2e4b6dca-9700-4715-b81d-4463cd7038de/snap" --disk-format qcow2 --container-format bare --name rerere
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | None |
| container_format | bare |
| created_at | 2016-02-17T11:54:41.000000 |
| deleted | False |
| deleted_at | None |
| disk_format | qcow2 |
| id | b12c6965-c6f8-4272-a8a0-453fc0fc03e2 |
| is_public | False |
| min_disk | 0 |
| min_ram | 0 |
| name | rerere |
| owner | fa343a042d2b47cbbeab08cca9913679 |
| protected | False |
| size | 13287936 |
| status | active |
| updated_at | 2016-02-17T11:54:44.000000 |
| virtual_size | None |
+------------------+--------------------------------------+
Optional: User may verify that the image has the desired location:
mfedosin@winter ~ $ glance image-show b12c6965-c6f8-4272-a8a0-453fc0fc03e2
+------------------+----------------------------------------------------------------------------------+
| Property | Value |
+------------------+----------------------------------------------------------------------------------+
| checksum | None |
| container_format | bare |
| created_at | 2016-02-17T11:54:41Z |
| direct_url | rbd://647f7ae8-648a-44f5-83ad-f7bd2299274e/images/2e4b6dca-9700-4715-b81d- |
| | 4463cd7038de/snap |
| disk_format | qcow2 |
| id | b12c6965-c6f8-4272-a8a0-453fc0fc03e2 |
| min_disk | 0 |
| min_ram | 0 |
| name | rerere |
| owner | fa343a042d2b47cbbeab08cca9913679 |
| protected | False |
| size | 13287936 |
| status | active |
| tags | [] |
| updated_at | 2016-02-17T11:54:44Z |
| virtual_size | None |
| visibility | private |
+------------------+----------------------------------------------------------------------------------+

5. User deletes his image. The image data is deleted too.
mfedosin@winter ~ $ glance image-delete b12c6965-c6f8-4272-a8a0-453fc0fc03e2
mfedosin@winter ~ $ glance image-show b12c6965-c6f8-4272-a8a0-453fc0fc03e2
404 Not Found: No image found with ID b12c6965-c6f8-4272-a8a0-453fc0fc03e2 (HTTP 404)

6. Attempts to access the public image data fail after that.
mfedosin@winter ~ $ glance --debug image-download 2e4b6dca-9700-4715-b81d-4463cd7038de --file ggg
curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: {SHA1}49eea3cf13d0aba2b76665245eab8cc45fb08342' -H 'Content-Type: application/octet-stream' http://192.168.0.2:9292/v2/images/2e4b6dca-9700-4715-b81d-4463cd7038de/file

HTTP/1.1 204 No Content
Date: Wed, 17 Feb 2016 12:01:54 GMT
Connection: close
Content-Type: text/html; charset=UTF-8
Content-Length: 0
X-Openstack-Request-Id: req-d77148fb-fd4b-4f7b-a646-30f494c480dd

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/glanceclient/shell.py", line 605, in main
    args.func(client, args)
  File "/usr/local/lib/python2.7/dist-packages/glanceclient/v2/shell.py", line 281, in do_image_download
    utils.save_image(body, args.file)
  File "/usr/local/lib/python2.7/dist-packages/glanceclient/common/utils.py", line 305, in save_image
    for chunk in data:
  File "/usr/local/lib/python2.7/dist-packages/glanceclient/common/utils.py", line 478, in __iter__
    self.iterable.close()
AttributeError: 'NoneType' object has no attribute 'close'
'NoneType' object has no attribute 'close'

mfedosin@winter ~ $ glance --version
1.2.0

Affected APIs:
The whole v1 API, with no way to fix it there - v1 always allows setting custom locations.
The v2 API when 'show_multiple_locations' is enabled (default: False).

Affected schemes:
All, except 'swift+config' and 'file', because custom locations are forbidden for them.

If a user knows a private image's id, he can build a custom location pointing to it and set that location on his own image, thereby gaining access to the private data.

Tags: security
Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Since this report concerns a possible security risk, an incomplete security advisory task has been added while the core security reviewers for the affected project or projects confirm the bug and discuss the scope of any vulnerability along with potential solutions.

description: updated
Changed in ossa:
status: New → Incomplete
Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

A couple of questions:

* The private image id is only shown for images which have a location (direct_url field)?
* A user can delete any image which has that direct_url, but it doesn't seem to affect images without a location, right?

Revision history for this message
Stuart McLaren (stuart-mclaren) wrote :

Thanks Mike.

I'm guessing some non-default configuration options are required to enable this? (eg, show_image_direct_url?)

    cfg.BoolOpt('show_image_direct_url', default=False,
                help=_('Whether to include the backend image storage location '
                       'in image properties. Revealing storage location can '
                       'be a security risk, so use this setting with '
                       'caution!')),

> sets custom location

Does this also need a non-default config option?

    cfg.BoolOpt('show_multiple_locations', default=False,
                help=_('Whether to include the backend image locations '
                       'in image properties. '
                       'For example, if using the file system store a URL of '
                       '"file:///path/to/image" will be returned to the user '
                       'in the \'direct_url\' meta-data field. '
                       'Revealing storage location can '
                       'be a security risk, so use this setting with '
                       'caution! '
                       'Setting this to true overrides the '
                       'show_image_direct_url option.')),

Should we plan to restrict the xxx_image_location policies by default? eg,

    "set_image_location": "",

In your example you're using the rbd store. Is there a set of options which allow using that store safely? eg can show_image_direct_url be set to 'false' and show_multiple_locations be set false for that store?

It seems that this may be worse for some stores than others. Eg if users have configured the swift store the old-fashioned way, they may get the credentials for the swift single-tenant user -- allowing deletion of *all* users' images, including private images, and also injection of bad images (though the checksum will provide some protection).

If access to the locations via the Glance API is required for some stores to work, should we consider restricting their display to 'admins' or the image owner by default?

Revision history for this message
Stuart McLaren (stuart-mclaren) wrote :

My understanding (which may not be 100%) is that the rbd location is used by Cinder.

If a user wants to create a new volume from an existing image this can be done in two ways:

1) the image can be streamed as usual
2) if the image backend is rbd, and the location is known, a shortcut can be taken: the image bytes don't need to be streamed. Instead a quick clone of the backing volume can be performed.

If the consumer of the location field is typically another OpenStack service (Cinder/whatever) it may be worth considering using Service Tokens here.

We could only expose the location if the request contained a particular role granted by a Service Token. In that way the end user wouldn't see the locations but other OpenStack services could.
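
For illustration, that could be expressed as a policy entry along these lines (a sketch only; the 'service' role name is an assumption about how a service-token-granted role might look, not an existing default):

    "get_image_location": "role:service or role:admin",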

Revision history for this message
Mike Fedosin (mfedosin) wrote :

@Tristan
"* The private image id is only shown for images which have a location (direct_url field)?"
I hope I got you right here... The image id is shown every time. It doesn't matter whether the image has a location or not.

"* A user can delete any image which has that direct_url, but it doesn't seem to affect images without a location, right?"
In the common case I suppose that's true. But, for example, for Ceph you can form the url to the desired file just as 'rbd://{image_id}'.
So you don't need to see the direct url - you can form it from the image id.

Revision history for this message
Mike Fedosin (mfedosin) wrote :

@Stuart
As I mentioned, forbidding the direct image url from being shown won't help, at least for Ceph, because you can form it as 'rbd://{image_id}'

Revision history for this message
Stuart McLaren (stuart-mclaren) wrote :

@Mike

The example location you gave is:

 rbd://647f7ae8-648a-44f5-83ad-f7bd2299274e/images/2e4b6dca-9700-4715-b81d-4463cd7038de/snap

'2e4b6dca-9700-4715-b81d-4463cd7038de' is the image id.

What's '647f7ae8-648a-44f5-83ad-f7bd2299274e'? Do you need to guess that as well, or is that something which is also known?

Revision history for this message
Mike Fedosin (mfedosin) wrote :

It's a cluster id; it can be specified, but it's not necessary. Glance store creates these urls by default.

"rbd://647f7ae8-648a-44f5-83ad-f7bd2299274e/images/2e4b6dca-9700-4715-b81d-4463cd7038de/snap" and "rbd://2e4b6dca-9700-4715-b81d-4463cd7038de" are equal.

Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Thank you Mike for the detailed explanation, I misread the "private image id" as the cluster id.

If I understand correctly, this bug happens when a user creates an image with a location set to another image's location.
Can't glance prevent setting a single location on multiple images?

Revision history for this message
Mike Fedosin (mfedosin) wrote :

Right now there is no way to prevent it. In v2 this behavior is disabled by default, but v1 always allows setting a single location on multiple images.

Today we're discussing adding a uniqueness requirement for location urls in the glance db. But it won't help, at least for the Ceph backend, because there are several ways to address an object in the store (see my post #8). So I found only one easy solution - if the url scheme is not 'http(s)', reject adding a custom location whose url doesn't contain the original image id. That prevents adding links to other images' data.

I wrote this code and now I'm testing it and writing the related tests. I believe I'll upload a patch very soon.
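
The shape of the proposed check would be roughly the following (an illustrative sketch only, not the actual patch; the function name is made up):

    from urllib.parse import urlparse

    def location_allowed(url, image_id):
        # External http(s) locations don't point into the backend store,
        # so they remain allowed.
        if urlparse(url).scheme in ('http', 'https'):
            return True
        # Any store-internal location must reference the image's own id,
        # which blocks linking to another image's data.
        return image_id in url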

Revision history for this message
Flavio Percoco (flaper87) wrote :

Mike,

Thanks a lot for reporting this issue, which I believe is quite critical.

As Stuart mentioned, exploiting this security issue requires some non-default config options to be set. That said, I believe some of our default policies are too permissive. Stuart pointed out the `set_image_location` policy, and I believe it should be admin-only by default.

We could also explore the possibility of not returning the image's location for public images. I believe this could be configured in the policy file.

In addition to the above, I believe we should seriously consider deprecating v1 entirely in N and disabling it by default in O.

I'd like to hear Hemanth's thoughts on this as well.

Revision history for this message
Mike Fedosin (mfedosin) wrote :

Thanks for your response Flavio!
"As Stuart mentioned, exploiting this security issue requires some non default config options to be set." It's not correct in common case, because for Ceph you don't need to know direct_url - you can build it as "rbd://{image_id}". So there is no possibility to avoid this bug when v1 is enabled.

I created a fix for that (my god... it's 5AM here) and it works for me. What can you say about this solution?

Revision history for this message
Mike Fedosin (mfedosin) wrote :
Revision history for this message
Stuart McLaren (stuart-mclaren) wrote :

@Mike

Your patch prevents setting arbitrary locations for all users, in all cases, including admins.

Could that be a problem? For example, say an admin has a workflow where they set the location to some pre-existing data.

> v1 always allows setting a single location on multiple images.

My understanding is that you can currently deploy v1 securely (using policies to prevent directly accessing locations), but you may get a performance hit for some operations, eg create image from volume/create volume from image. (But I'm not as familiar with the ceph store as others.)

Would a solution where a regular user cannot set a location at all, but an admin or openstack service can set whatever they want, be ok?

Revision history for this message
Mike Fedosin (mfedosin) wrote :

It's easy to update the patch and allow an admin to set any location he wants. We can also update it later to allow services to do the same once service token support is added.

If that's the only concern, I'll write the code and related tests tomorrow.

Changed in ossa:
status: Incomplete → Confirmed
importance: Undecided → Critical
Revision history for this message
Hemanth Makkapati (hemanth-makkapati) wrote :

I agree with Stuart about the patch being a little too restrictive. I'm sure there must be a decent number of use cases where setting the location to pre-existing data is needed. So, I'm leaning towards a policy-based solution that restricts any user apart from admin from being able to set locations on an image at all.

Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Preventing regular users from using the glance image location seems like a regression... would this be accepted for the stable branches?

Revision history for this message
Flavio Percoco (flaper87) wrote :

We're working on a different solution for this. Let's dig a bit more and see what we can do. This is not an easy one.

Revision history for this message
Stuart McLaren (stuart-mclaren) wrote :

> Preventing regular users from using the glance image location seems like a regression

1) This would be a more restrictive default policy rather than anything hard coded.

I've always considered exposing the locations to be exposing some of Glance's internals. (I know there are different opinions on this.)

I think the location stuff was made available for anyone who wanted to switch it on, but we put warnings on the config options -- "security risk" (with exclamation points!) -- because anyone who is concerned about security really shouldn't be giving direct access to the locations to regular users. There's just too much scope to do bad things (eg this bug).

We can try to make the location stuff less amazingly insecure, but I think that anyone who has enabled it to date should have been aware of the dangers of enabling it for all users.

Deployments which don't allow direct access to the locations for all users will *always* have a much smaller attack surface.

2) As far as I'm aware the killer use case for allowing access to the locations is for performance wins where Cinder can avoid streaming all the image data.

This doesn't require the end user to access the location, just the Cinder service on behalf of the user. This could be done with service tokens.

Revision history for this message
Mike Fedosin (mfedosin) wrote :

> Deployments which don't allow direct access to the locations for all users will *always* have a much smaller attack surface.

The problem is EVERY deployment allows direct access to the locations for all users via v1's --location. And it's not about policies or config params: it's always enabled, and deprecating --location in v1 is a huge API impact and will also break Nova, which uses this option.

Using service tokens is a solution, at least for Nova, but it's a feature, and we have to think about porting the fix to stable branches. So, to fix this we need to:
1. Deprecate v1, declare it insecure and remove it from every existing deployment.
2. Add 'service_token' support to Glance and Nova, Cinder, Heat, Ironic...
3. Forbid regular users from setting custom locations in the v2 API with policies.
Frankly speaking, it's rather hard for me...

From my side I suggest an easy but working solution that prevents this type of attack - allow regular users to set custom locations only if the scheme is 'http(s)' or the url contains the image id.

Revision history for this message
Mike Fedosin (mfedosin) wrote :

Several changes were made after the discussion of this bug:
1. Returned the ability for admins to set any location.
2. The image is now killed in v1 if its location was invalid.
3. Updated the message to be more understandable.

Revision history for this message
Flavio Percoco (flaper87) wrote :

Mike,

thanks for the new patch. We're getting closer.

Unfortunately, I don't think the check for valid external URIs is good enough. If I create an empty image and set my own location, which doesn't have the image id in it, I'll still be able to exploit this. Should we check in the DB and see if the location is being used?

Stuart, comments?

Revision history for this message
Stuart McLaren (stuart-mclaren) wrote :

@Mike

> The problem is EVERY deployment allows direct access to the locations for all users via v1's --location. And it's not about policies or config params: it's always enabled

I'm not sure I understand this comment. Reading it at face value, it's saying that literally every existing deployment allows users to directly change locations, and there is nothing (no config change or policy change) that the operator can do to prevent it.

But I find that if I change the following in policy.json

    "delete_image_location": "role:admin or role:glance_admin",
    "get_image_location": "role:admin or role:glance_admin",
    "set_image_location": "role:admin or role:glance_admin",

Then a user is not allowed to set the location via the v1 CLI:

 $ glance --os-image-api-version 1 image-create --disk-format raw --container-format bare --name x1 --location http://www.google.com
403 Forbidden: Access was denied to this resource. (HTTP 403)

In my experience it is possible to disallow direct image location manipulation via policies without breaking nova (again, in my experience, this is what larger deployments typically do).

> and also it will break Nova, which uses this option.

It may be that Nova can take advantage of directly manipulating locations in some cases (Mike -- do you have more details on what nova does here?) but I don't think directly manipulating the locations is a requirement for a working Nova.

Revision history for this message
Hemanth Makkapati (hemanth-makkapati) wrote :

+1 to Stuart's assessment.

Mike, how does it break nova? From what I understand, Nova doesn't depend on the location stuff to create images. Let me know what I'm missing here.

Revision history for this message
Hemanth Makkapati (hemanth-makkapati) wrote :

Keeping the attack vector aside, is there a fundamental issue here?
As a user, can I shoot myself in the foot by setting the same location on multiple images, so that when I delete one of those images the others are gone too?

Essentially, should glance start treating locations that were 'set' differently from locations that it creates itself?
If a user sets the location on an image, then the data wasn't uploaded through Glance. So why should Glance delete data that it did not upload? Can we just 'unset'/soft-delete the location?
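
As a sketch of that idea (purely illustrative; Glance has no such 'user_set' flag on locations today):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Location:
        url: str
        user_set: bool = False  # hypothetical: added by a user, not Glance

    @dataclass
    class Image:
        id: str
        locations: List[Location] = field(default_factory=list)

    def delete_image(image, store):
        # Only delete backend data for locations Glance itself created;
        # user-set locations are merely unset, leaving the data alone.
        for loc in image.locations:
            if not loc.user_set:
                store.delete(loc.url)
        image.locations.clear()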

Revision history for this message
Hemanth Makkapati (hemanth-makkapati) wrote :

@Flavio,
"If I create an empty image and set my own location, which doesn't have the image id in it, I'll still be able to exploit this."

With Mike's patch, one won't be able to set any location that doesn't have the image id present in it. Well, unless it's HTTP.
If the public image has an HTTP location (and it is exposed), then this exploit is still possible I guess.

So, it looks like this patch essentially reduces the attack vector to a great extent.
Or did I get something wrong? :)

Revision history for this message
Flavio Percoco (flaper87) wrote :

Hemanth,

Good catch,

I think I meant "which does have". That is, if I'm able to create a fake location with the image id, it'll pass Mike's check and it'll be used. Am I missing something?

Revision history for this message
Grant Murphy (gmurphy) wrote :

First draft of an impact description
--

Title: Unauthorized image deletion in Glance
Reporter: Mike Fedosin (Mirantis)
Products: Glance
Affects: <=2015.1.3, <=11.0.1

Description:
Mike Fedosin from Mirantis reported a vulnerability in Glance that allows any authenticated
user to delete a public image. If a user creates an image with the same custom location as
a public image, the public image data will also be deleted when the user deletes their image.
All setups that allow custom image locations are affected. Glance services using the V2 API
will only be affected when the configuration value show_multiple_locations is set to 'True';
by default this option is not enabled.

Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Grant, the affects line needs to be: "<=2015.1.3, >=11.0.0 <= 11.0.1". Otherwise LGTM

Revision history for this message
Mike Fedosin (mfedosin) wrote :

Hey! I was busy last week, so I didn't pay too much attention to this bug... I think I should be more precise here.

First of all, it's certainly true that an operator can disable setting custom locations with policies. And that was the first thing I tried to do. Unfortunately it leads to several tempest tests failing - all of them from Nova:
tempest.api.compute.images.test_images.ImagesTestJSON.test_create_image_from_stopped_server
tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON
tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_delete_image
tempest.api.compute.images.test_images_oneserver.ImagesOneServerTestJSON.test_create_image_specify_multibyte_character_image_name
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_create_backup
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_shelved_state
tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern
tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_shelve_shelved_server
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_shelve_unshelve_server

That's what made me say that disabling custom locations breaks Nova. All of these tests fail only for Ceph deployments; Swift passes.

My second thought was to add a uniqueness requirement for locations, but it doesn't work (at least for ceph), because there are several ways to define the url for an image. For example, "rbd://647f7ae8-648a-44f5-83ad-f7bd2299274e/images/2e4b6dca-9700-4715-b81d-4463cd7038de/snap" and "rbd://2e4b6dca-9700-4715-b81d-4463cd7038de" are equal. Adding workarounds for these cases with regexps is not an option, I suppose.

My final thought was to restrict what type of locations a user may set - at first it was only http(s), but then I found a pattern: each glance location contains the image identifier. That allowed me to relax the requirement and also permit users to set locations that contain the original image id.

Let's see how it works:
1. Assume we have a good public image with id "good-public-image". The image file has the link "rbd://good-public-image"
2. An attacker wants to destroy the data and creates his own private image with id "malicious-image".
3. Then he tries to set the custom location "rbd://good-public-image" on his image, but it fails, because this url doesn't contain the id of his own private image, i.e. "malicious-image".

About the solution - it completely eliminates the possibility of this type of attack, but adds some restrictions for honest users. I think we can consider it a good workaround, but in Newton we have to define a better one. I suppose we can implement the solution that was proposed for the Glare service - marking which type of location (custom or not) we have. You can read about it here https://review.openstack.org/#/c/283136/2/specs/newton/glare-api.rst LL 136-142.

Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Mike, that workaround looks good, but are we considering it for the stable branches too?

Also, it seems like you assume a glance location always contains the image identifier, but is this also true for image locations set manually (e.g., when using a direct swift url to a custom file)?

Revision history for this message
Mike Fedosin (mfedosin) wrote :

Hi! This fix is for stable branches only, because in Newton we will implement it in the right way and finally separate locations into two groups: those which were created by the user and those which were created by Glance itself. In the first case Glance won't delete the files when the user deletes the image.

Yes, we assume that a glance location always contains the image identifier - it's not the best approach, but it's the lesser of all evils. Frankly speaking, in my opinion, allowing users to set custom locations is a very insecure thing - Glance works with admin privileges and can read or delete any data in the storage. But with my solution, if a user (not an admin) really needs to upload a file directly, bypassing Glance, and set the location on his image, he is still able to do it.
He should perform the following steps (sketched below): 1. Create the image record in the DB and get its ID. 2. Upload the file directly to the store under the same name as the ID. 3. Finally, set the custom location on his image.
And I'm going to say it again - other workflows are really insecure and must be rejected; setting custom locations to arbitrary existing files may let a malicious user read or delete sensitive data, which doesn't even have to be an image file.
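
In CLI terms the permitted workflow would look something like this (illustrative commands and ids only; it assumes the user has direct write access to the backing rbd pool, which is itself a strong assumption):

    $ glance image-create --name myimage --disk-format qcow2 --container-format bare
    # note the returned id, e.g. 11111111-2222-3333-4444-555555555555
    $ rbd --pool images import myfile.img 11111111-2222-3333-4444-555555555555
    $ glance location-add 11111111-2222-3333-4444-555555555555 \
        --url 'rbd://11111111-2222-3333-4444-555555555555'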

Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Thank you Mike for the extra details. So basically the workaround adds a new requirement on location uris: they now must contain the image uuid.

Should we subscribe Tony Breeds as a Stable branch maintainer to check if this workflow/behavior change is acceptable ?

Revision history for this message
Mike Fedosin (mfedosin) wrote :

One addition: the requirement is "they now must contain the original image uuid", and this requirement does not apply to cloud administrators or 'http' locations.

More feedback is good, so yeah - invite Tony Breeds here :)

Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Tony, here we are considering removing the ability to use a url without the image uuid as an image location. Since it's a behavior change, we wonder whether that's an acceptable evil (stable-branch-wise) to work around the above issue. What do you think?

Revision history for this message
Stuart McLaren (stuart-mclaren) wrote :

If I'm reading the patch correctly, we're hardcoding that only an admin can set arbitrary locations:

 if not(req.context.is_admin or utils.validate_external_url

Previously it was possible to use the policy file to give permissions to just specific roles/users to set arbitrary locations, eg:

 policy.json:
    "set_image_location": "role:whoever",

It may be a corner case, but if someone has a workflow that requires a non-admin (but hopefully trusted) user to set an arbitrary location, the current patch would prevent that.

Could we do something like have a new policy that would cover that case, eg:

 "set_unrestricted_location": "role:whoever"

That would give us three policy-defined sets:

1. Users that can't set locations at all
2. Users that can set locations to allowed values only
3. Users that can set location to any value
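
In policy.json terms the three sets might be expressed like this (role names are placeholders, and 'set_unrestricted_location' is the proposed new policy, not an existing one; anyone holding neither role falls into set 1):

    "set_image_location": "role:location_setter or role:admin",
    "set_unrestricted_location": "role:admin",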

Revision history for this message
Mike Fedosin (mfedosin) wrote :

Good suggestion Stuart. I'll update the patch today.

Revision history for this message
Nikhil Komawar (nikhil-komawar) wrote :

Hi, I just read through the bug (as I was not involved earlier). I think we need to define what exact workflow is being proposed here.

It seems like we need to clarify in one complete outline of the proposal the following things:

1. two non-default config options need to be enabled for v2, while a single location can always be set via v1?

2. some policy changes in the source tree that will restrict access to regular users (including project admins?) (for both v1 and v2?)

3. Nova uses (v1) direct access to locations in libvirt driver ( https://github.com/openstack/nova/blob/824c3706a3ea691781f4fcc4453881517a9e1c55/nova/virt/libvirt/imagebackend.py#L967 and https://github.com/openstack/nova/blob/824c3706a3ea691781f4fcc4453881517a9e1c55/nova/virt/libvirt/driver.py#L1517 )
Is setting the default policy to be restrictive going to break this for those users?

4. The primary use case of image-locations is for operators to specify fully customized locations (within glance's location comprehension constraints), so they can point to locations that are near some services like nova (using v2). This is to boost boot times and reduce network lag and errors. Will admins still be able to set the image location as they find convenient?

5. What will happen to existing images in the DB that were stored by specifying an image location that contains an identifier that's not the image id?

Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Based on the above comment and other related issues (bug 1549483 and bug 1555590), I'd like to handle image-location issues as a class B2 type of bug according to the VMT taxonomy ( http://security.openstack.org/vmt-process.html#incident-report-taxonomy ).

I'm switching the OSSA task status to opinion until a Security Note (OSSN) is defined.
However if a backportable fix can be implemented, then we can revert that decision and issue an advisory.

Changed in ossa:
status: Confirmed → Opinion
Revision history for this message
Mike Fedosin (mfedosin) wrote :

Now operators are able to specify with policy which users can set arbitrary locations.

Revision history for this message
Mike Fedosin (mfedosin) wrote :

@Nikhil
1. Yes, that's correct, but as you may know, setting a custom location for an image is a standard use case for Glance. Just because v2 doesn't support it by default doesn't mean that no one uses this feature - they just use glance v1.

2. Done (see 3rd PS)

3. Ouch... That's bad :( In that case we have to allow this user to set any location with policies. But fmpov it's better to rewrite the nova code so it doesn't do that any longer.

4. Yes, by default it's enabled.

5. Unfortunately nothing - they will still be there after this patch.

Revision history for this message
Nikhil Komawar (nikhil-komawar) wrote :

Thanks for the clarification Mike. This really helps.

I browsed through the patch, and I do have a question regarding the direct snapshot method in Nova for the RBD pool. I think with this change they will need to enable unrestricted location access, which I think we can avoid by adding logic to the policy check that identifies whether an http url, the image_id, or the rbd-style "snap" suffix is present in the location url. Also, it would be nice to have a note above that logic explaining why only these are being considered in the if case.

Make sense?

Revision history for this message
Mike Fedosin (mfedosin) wrote :

Nikhil, I made the required changes. Please take a look - snap-style urls are now considered fine.

Revision history for this message
Mike Fedosin (mfedosin) wrote :
Revision history for this message
Nikhil Komawar (nikhil-komawar) wrote :

@Mike: we are almost there. You just need to make sure that the commit message is updated to mention image_id, http and snap-style urls. Currently it mentions only image_id and http.

Also, the patch by itself looks good. I will ask if someone has bandwidth to test this on devstack.

Revision history for this message
Nikhil Komawar (nikhil-komawar) wrote :

I'm adding Nova coresec team to the subscribers' list to help tie the ramifications of this bug with the ongoing Glance v2 work.

Revision history for this message
Andrew Laski (alaski) wrote :

I do not see that this has anything to do with the Glance v1->v2 work in Nova. Nova does not allow creating images with custom locations or updating an image location so it is not an attack vector here.

Nova is dependent on Glance v1 currently, which is a reason it can't be thrown in a fire and this vulnerability ignored, but it is not the only reason that Glance v1 still exists and should be fixed.

Revision history for this message
Nikhil Komawar (nikhil-komawar) wrote :

Hi Andrew,

Thanks for the input. Currently, Nova is relying on custom location logic for Ceph specific snapshots https://github.com/openstack/nova/blob/824c3706a3ea691781f4fcc4453881517a9e1c55/nova/virt/libvirt/driver.py#L1517 .

As a part of the v2 transition, multiple locations need to be enabled https://review.openstack.org/#/c/301741/8/specs/newton/approved/use-glance-v2-api.rst@260 .

So, the reason this becomes relevant is: if Glance deployments are expected to allow setting multiple locations on such drivers, we need to communicate the right strategy to operators for doing so. Mentioning the severity of the risk is important as well.

Revision history for this message
Andrew Laski (alaski) wrote :

Nikhil,

Nova does rely on custom location logic for a specific feature, but it does not allow the setting of custom locations, so it is not an attack vector here.

I'm trying to understand what involvement the Nova team should have here and as far as I can tell there is nothing to be done.

Revision history for this message
Nikhil Komawar (nikhil-komawar) wrote :

Thanks for your feedback Andrew. And thanks for asking that question.

(Just to clarify your statement for others too -- Nova's proxy Images API does not allow setting custom locations for the end user.)

I've added the nova-core-sec team to this bug after some discussion initiated by Mike Fedosin questioning whether Glance-v2 is production ready or not. Hence, this is related in the sense that if Nova defaults to using Glance-v2, operators would be expected to deploy Glance-v2 as well. If that happens, what does it mean for deployments that require the multiple-locations features to expose Glance-v2 to end users?

The image immutability/stability promise is broken by this bug, and the following bugs affect it too: [1], [2].

So, Mike has proposed the following for the Nova v1->v2 work:

* Add experimental support for glance v2 in Nova, but send an OSSN recommending not to use it until all issues are fixed.

[1] https://bugs.launchpad.net/glance/+bug/1549483
[2] https://bugs.launchpad.net/glance/+bug/1555590

Revision history for this message
Nikhil Komawar (nikhil-komawar) wrote :

Related content on the image-location bugs from his emails is as follows:
###################################################################

---

Adoption of glance v2 in Nova requires allowing users to set custom locations for their images in order to make snapshots on deployments with Ceph. https://github.com/openstack/nova/blob/824c3706a3ea691781f4fcc4453881517a9e1c55/nova/virt/libvirt/driver.py#L1517
Glance v2's implementation of custom locations is a security threat, and its use is not recommended for anyone except administrators, because it allows users to replace the location of their active images. (Bug https://bugs.launchpad.net/glance/+bug/1549483) Example:

mfedosin@wdev:~$ glance image-create --name good --disk-format qcow2 --container-format bare --visibility public
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | None |
| container_format | bare |
| created_at | 2015-11-10T18:41:53Z |
| disk_format | qcow2 |
| id | 2a745d21-66b7-43e0-90b5-ebe62232f7d6 |
| locations | [] |
| min_disk | 0 |
| min_ram | 0 |
| name | good |
| owner | f3b42d4b90d840b8806e46fb4a7edca3 |
| protected | False |
| size | None |
| status | queued |
| tags | [] |
| updated_at | 2015-11-10T18:41:53Z |
| virtual_size | None |
| visibility | public |
+------------------+--------------------------------------+
mfedosin@wdev:~$ glance location-add 2a745d21-66b7-43e0-90b5-ebe62232f7d6 --url 'https://dl.dropboxusercontent.com/u/13626875/good.txt'
+------------------+----------------------------------------------------------------------------------+
| Property | Value |
+------------------+----------------------------------------------------------------------------------+
| checksum | None |
| container_format | bare |
| created_at | 2015-11-10T18:41:53Z |
| disk_format | qcow2 |
| file | /v2/images/2a745d21-66b7-43e0-90b5-ebe62232f7d6/file |
| id | 2a745d21-66b7-43e0-90b5-ebe62232f7d6 |
| locations | [{"url": "https://dl.dropboxusercontent.com/u/13626875/good.txt", "metadata": |
| | {}}] |
| min_disk | 0 |
| min_ram | 0 |
| name | good |
| owner | f3b42d4b90d840b8806e46fb4a7edca3 |
| protected | False |
| schema | /v2/schemas/image |
| size | 43 |
| status | active |
| tags | [] |
| updated_at | 2015-11-10T18:42:21Z |
| virtual_size | None |
| visibility | public |
+------------------+----------------------------------------------------------------------------------+
mfedosin@wdev:~$ glance image-download 2a745d21-66b7-43e0-90b5-ebe62232f7d6 --file ooo
mfedosin@wdev:~$ cat ooo
I'm really good image.
mfedosin@wdev:~$ glance location-add 2a745d21-66b7-43e0-90b5-ebe62232f7d6 --url 'https://dl.dropboxusercontent.com/u/13626875/bad.txt'
+------------------+----------------------------------------------------------------------------------+
| Property | Value |
+------------------+----------------------------------------------------------------------------------+
| checksum | None |
| container_format | bare |
| created_at | 2015-11-10T18:41:53Z |
| disk_format | qcow2 |
| file | /v2/images/2a745d21-66b7-43e0-90b5-ebe62232f7d6/file |
| id | 2a745d21-66b7-43e0-90b5-ebe62232f7d...


Revision history for this message
Nikhil Komawar (nikhil-komawar) wrote :

I also wanted to add some related comments from my side:

* I do not think these issues should affect Glance v2's ready status.

* image-locations were always designed to be admin-oriented, as it's impossible for an end user to determine which types of location schema can be supplied to glance for it to comprehend them.

* There is no discovery added to it.

* The use case of image-locations has been for the operator to specify the most optimal location to pull an image from -- say, if it's a massively distributed cloud and Nova or similar services need to be aware of the best location to pull data from.

* The default configuration is 'False', i.e. usage of this feature is disallowed.

* The feature in Nova (rbd snapshots) uses it, but the testing is done on a separate gate job, and the default (glance) pull (say on devstack) won't enable the operator to use glance with it.

* Currently, the documentation suggests only the direct_url config. Once glance v2 is used by default, multiple locations would need to be enabled.

* The security impact of the feature would need to be updated to reflect the same and the severity of the impact may increase http://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/rbd-instance-snapshots.html#security-impact

* Nevertheless, the effort to port to Glance v2 should not be halted or changed. Glance v2 is on the way to becoming the default version, and we need to work out proper documentation and best practices. Some of this information can go in the operator manual, where ops would need to carefully deploy private glance nodes to enable multiple locations for the ceph case -- but all that is non-standard anyway.

Revision history for this message
Nikhil Komawar (nikhil-komawar) wrote :

just updating my previous comment here (see comment #45):

@Mike: You just need to make sure that the commit message is updated to mention image_id, http and snap-style urls. Currently it mentions only image_id and http.

I think the patch by itself looks good.

thoughts?

Revision history for this message
Ian Cordasco (icordasc) wrote :

I've been added to review the patch. Has anyone verified this doesn't happen with a VMWare setup? Does anyone even know what a VMWare URL would look like/how this would behave with locations?

Revision history for this message
Nikhil Komawar (nikhil-komawar) wrote :

Good point Ian. I guess we need to involve our glance-core and VMware driver maintainer Sabari in this bug too to further evaluate the scope of this locations issue.

I would still prefer to keep this private, as it is a potentially dangerous issue for some cases and affects stable branches, and thus any live deployments on them. Next step: let's get some initial feedback from Sabari on the VMware driver.

Revision history for this message
Nikhil Komawar (nikhil-komawar) wrote :

@Sabari: any updates for us?

Revision history for this message
Ian Cordasco (icordasc) wrote :

So we've been waiting on Sabari for over a month. At this point, I think we should just move this forward.

Revision history for this message
Erno Kuvaja (jokke) wrote :

I had a read-through of this bug and the patch. One thing that is not clear to me is why we do guesswork about what the image location path can or cannot be, instead of just checking that, for any RW store, we do not allow setting a location that we a) do not have access to (which prevents any possibility of guessing what a file name would be in future) or b) is not in our locations table already. This would also solve any potential issue with the VMWare driver regardless of whether we reach Sabari for his expertise or not.

Also, I'd prefer fixing this in master and all stable branches for now; we can alter master later, after we get service tokens into use. For now I think it's more important that we provide a patch to all consumers who are using, for example, Nova+rbd or Cinder+rbd at the time this becomes public, rather than go with a statement of "We know this is wrong but didn't bother to fix it properly".

My biggest worry is that even though exploiting this needs non-default config settings, those settings are mandatory for the rbd drivers, and based on the latest survey that's >50% of deployments.

Revision history for this message
Erno Kuvaja (jokke) wrote :

Amendment to point 'b)' above: "or b) is in our locations table already".

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Added Fei Long to the bug. He's running into this in his cloud and will be helpful for assessing/testing the fix.

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Sent Sabari (vmware driver maintainer) a reminder email to look at this. He's on PDT, so we may not hear back until later today. (I'd like to make this one last attempt to determine the vmware store situation before we move forward, otherwise I agree with Ian that this has sat too long.)

Will email Mike now to ask him to reply to Erno's questions in comment #60.

Revision history for this message
Feilong Wang (flwang) wrote :

Is the patch on gerrit now? I reviewed the latest patch, and there are at least 2 points we need to improve:

1. About the check "if url.startswith('http') or image_id in url:" - unfortunately, we can't check whether image_id is in the url, since it's easy to bypass: because Glance lets a user specify the image id, the user can easily take the RBD cluster ID as his image id to bypass this check (illustrated below).

2. Currently, apart from http and filesystem (these two are OK for this case), we also support the Cinder, RBD, sheepdog and vmware drivers. We need to involve the maintainers of each of these drivers. Unfortunately, as the maintainer of RBD, I wasn't involved from the start, or I missed something. So my suggestion is that we should involve all the other maintainers to make sure the fix is good for all the drivers.
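
To make the first point concrete (an illustrative transcript; the ids reuse the example earlier in this bug, and it assumes the deployment lets users choose their own image id on create, which Glance permits):

    # victim's location: rbd://<fsid>/images/<victim_image_id>/snap
    # create an image whose *own* id equals the cluster fsid:
    $ glance image-create --id 647f7ae8-648a-44f5-83ad-f7bd2299274e \
        --disk-format qcow2 --container-format bare --name innocent
    # "image_id in url" is now satisfied by the fsid alone:
    $ glance location-add 647f7ae8-648a-44f5-83ad-f7bd2299274e \
        --url 'rbd://647f7ae8-648a-44f5-83ad-f7bd2299274e/images/2e4b6dca-9700-4715-b81d-4463cd7038de/snap'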

Revision history for this message
Feilong Wang (flwang) wrote :

Now I'm going to repeat Erno's comments.

We have already spent 9 months trying to find a perfect solution, and unfortunately we're still in progress. That means we released Mitaka and Newton with a known critical bug. That's fine for the master branch, BUT it's not good for all the impacted stable branches. I would suggest working out a fix ASAP based on the current discussion - maybe not perfect, but one that can give those cloud providers peace of mind. And then put more effort into service tokens to get a better solution for this.

Revision history for this message
Feilong Wang (flwang) wrote :

Another thought about the fix: we need to fix this from at least two sides, nonevent and incurred:

1. Nonevent: avoid adding/updating locations like this; this is covered by Mike's patch.

2. Incurred: avoid deleting the original image data if there are already images created based on other images' locations. This part is missing from Mike's patch.

If Mike's busy at the moment, then as the maintainer of the Glance RBD driver I'm happy and keen to work on this, since our cloud is using RBD (unfortunately, in this case :( ).

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

My opinion is that we need to move on this. As Erno pointed out, even though exploiting this vulnerability requires the use of non-default config option settings, you can't efficiently use RBD unless you use the problematic settings. Further, there are a large number of deployments using RBD and hence vulnerable to this.

I agree with Erno and Fei Long about needing to handle the "incurred" locations, because it's possible that someone could do this without bad intent (for example, if an end user wanted a more descriptive name or more detailed metadata on a public image, the end user might create a new image with all the detailed info and set the location to the public image data). So there could be a bunch of these things sitting around in a deployment, and they won't be noticed until an end user deletes one because s/he doesn't need it anymore.

Two questions:
(1) For glance coresec members: Please review my assessment of the affected stores and indicate what unknown info you can fill in. Please respond within 24 hours, that is, by 22:00 UTC tomorrow, Tuesday 8 November 2016.

(2) For OpenStack VMT: I need some advice about how to deal with this situation, that is, the bug may apply to more stores than just RBD. For example, efficient use of the Cinder backend would entail using the same non-default config options. The other issue is whether the proposed fix would break common non-default deployments of the other stores. My questions are (a) is it OK to add all driver maintainers to this bug (see http://docs.openstack.org/developer/glance_store/drivers/ for the list; we're talking 3 people not already on the bug), and (b) what do we do if the other driver maintainers don't respond in a timely manner? (I'm worried about releasing a fix for RBD that broadcasts knowledge of the vulnerability for the other unfixed backends.)

Store: Status, Is a Test Backend Available?
Filesystem: not affected (can't set location?), yes (devstack)
HTTP: not affected (HTTP store is read-only), N/A
RBD: affected in common non-default configuration, unknown
Cinder: affected if common non-default RBD configuration is used, yes (devstack)
Swift: could be affected if you exposed image locations (though there is no reason to expose them, in fact it's a really bad idea), yes (devstack)
VMware: unknown, unknown
Sheepdog: unknown, unknown

Revision history for this message
Erno Kuvaja (jokke) wrote :

@brian-rosmaita Ceph is FOSS, so yes, rbd backends are available if you want to set one up and point, for example, devstack at it. TripleO can also automate this on deployment and allows doing it on virtual infrastructure. I will kick up a test system when we have a patch available.

Revision history for this message
Erno Kuvaja (jokke) wrote :

I'm sorry to say, but IMO 2 months of no response to a security bug from Sabari indicates that we do not have a maintainer for our VMWare driver, and we need to flag this to the community to see if we can find a new maintainer for it. If not, we should deprecate the driver for removal.

Revision history for this message
Ian Cordasco (icordasc) wrote :

Erno, let's discuss that separately from this bug. Once this patch has been approved, let's start that discussion on the mailing list. For now, let's keep the discussion here focused on Fei Long's testing and verification of the other stores before we lose track of the important parts of this discussion.

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

I completely forgot that Sabari has been on vacation (took some time before and after the summit). I got an email from him a few minutes ago, he just got back today, and will take a look at this before the end of day (PST).

Revision history for this message
Sabari Murugesan (smurugesan) wrote :

On the VMware store, let me provide a quick update.

The Location URI looks like the following and contains the image_id :-
vsphere://192.168.1.5/folder/images/37324224-7c96-4b7f-9518-dd0a83cbb31a?dcPath=default&dsName=NFS

As you can see, you can form the URI for any other image from this information.

Hence, the store is affected if you expose locations to end users, which I believe is a non-default configuration for glance-api. Let me know if I need to answer any other questions specifically.

Revision history for this message
Sabari Murugesan (smurugesan) wrote :

Just to be clear, let me rephrase my update :-

In a setup where the glance-api node handles nova/service interactions and end-user requests, if show_multiple_locations=false (the default) and show_image_direct_url=false (the default) and the set_image_location policy is admin-only (the default since Newton), then glance will be able to interact with the vmware datastore correctly, both for snapshots and regular end-user image operations, and the store is not exposed to this bug.

For the non-default configuration case, I looked at the patch from Mike and, IIUC, it verifies that the location URL being added contains the image_id of the image being updated. This patch will work for the VMware driver, and no special handling needs to be done as in the case of the RBD driver.

Revision history for this message
Ian Cordasco (icordasc) wrote :

Thank you Sabari!

Revision history for this message
Jeremy Stanley (fungi) wrote :

Responding to Brian's specific questions for the VMT in comment #67:

2.a. We favor looping in whoever you need to get embargoed fixes written and reviewed. By all means, if you prefer to get feedback or assistance from your driver maintainers on the issue, subscribe them to this bug report as soon as possible.

2.b. If specific drivers remain unfixed in a timely manner we can mention them in the advisory as affected but unfixed, though you'll have a hard decision ahead of you as a project regarding whether you can safely continue to support drivers which lack fixes for known vulnerabilities (and longer term if you're unable to do something about it, the VMT may be unable to support Glance as a whole for similar reasons).

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Adding the remaining driver maintainers:

Matthew Oliver (Swift)
Tomoki Sekiyama (Cinder)
Yamada Hideki (Sheepdog)

Matthew: I believe that Swift could be vulnerable, but only if a deployer used non-recommended, non-default configuration options. (Please confirm/deny.) My primary reason for adding you here is to make sure that the proposed fix won't somehow cause a problem for "normal" Swift deployments.

Tomoki: I believe that a deployer using the Cinder backend who wanted to take advantage of cinder-side quick snapshotting would have to use the same non-default configuration options as the Ceph backend. (1) Would that make Cinder vulnerable to this exploit, and if so, will the proposed fix work for Cinder? (2) Please monitor the proposed fix to make sure it either works for Cinder (if Cinder is vulnerable) or won't cause a problem (if Cinder isn't vulnerable).

Yamada: I don't know anything about the Sheepdog backend, can you indicate whether Sheepdog is vulnerable to this exploit, and if so, whether the proposed fix will work? In either case, please continue to monitor the proposed fix to either make sure it works or make sure it won't cause a regression that breaks the Sheepdog backend.

Thank you!

Revision history for this message
Ian Cordasco (icordasc) wrote :

Just to remind everyone, the Ocata-1 release deadline is Nov 14-18, and Glance would like to have a release on the 17th. As release liaison and both a stable-maint and security core, I'd appreciate people's help getting this bug squashed, but I think we also need to accept the fact that this won't make it into O-1. What we need to do is determine whether we think we can deliver this within a week from then, so we could potentially delay a stable release until this can be included.

summary: - Regular user can delete any image file
+ Regular user in non-default non-recommended configuration can delete any
+ image file
Revision history for this message
YAMADA Hideki (yamada-h) wrote :

Brian: I understood what happened. I will check it soon.

Revision history for this message
Tomoki Sekiyama (tsekiyama) wrote :

Brian, the Cinder store is also vulnerable to this exploit when it is configured to store image volumes in the shared internal tenant. (It has another mode that stores the volumes in the user's tenant; that mode is not vulnerable because other tenants' users cannot touch the image volume.)

Also, the Cinder store uses a url like 'cinder://<cinder-volume-uuid>' which does not include the image uuid, so the currently proposed patch does not solve the issue. The image volumes stored in the shared internal tenant have the owner information in the volume metadata, so we need to check whether the volume is owned by the current user before adding its location to an image.
The attached patch does the check.
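
The shape of such a check might be roughly the following (a sketch, not the attached patch; the metadata key name is an assumption):

    from glance_store import exceptions

    def _assert_volume_owner(volume, context):
        # Image volumes in the shared internal tenant record their real
        # owner in the volume metadata; refuse the location otherwise.
        if volume.metadata.get('image_owner') != context.owner:
            raise exceptions.Forbidden('volume does not belong to '
                                       'the requesting tenant')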

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Tomoki, a few comments about your patch. These might all be OK, just some things I noticed.

(1) Previously, if _check_context() raised glance_store.exceptions.BadStoreConfiguration, the code caught it, logged it, and get_size() returned 0. The patch doesn't catch this exception any more.

(2) The patch raises glance_store.exceptions.BackendException -- I'm worried that this is going to become a 500 somewhere up the call chain (which is better than allowing a security exploit, but something that translated to a 403 might be nicer?). I haven't done much work on glance_store, so if the BackendException is more consistent with what the other drivers do, that's fine -- just thought I'd ask.

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Mike, how are things looking for you this week? Do you have time to revise your patch, or should Fei Long pick it up? See Ian's comment above (#77) about the timing ... we need to move forward on this issue this week. Thanks!

Revision history for this message
Feilong Wang (flwang) wrote :

Today I finally got some time to work on the 'incurred' case; could you please review it? Hope this can help move this bug along faster.

Revision history for this message
Tomoki Sekiyama (tsekiyama) wrote :

Brian, thanks for your comments. I have made this more straightforward --

(1) It no longer changes the current behaviour when _check_context() raises glance_store.exceptions.BadStoreConfiguration.

(2) It now raises glance_store.exceptions.Forbidden. We should translate that to an HTTPForbidden exception on the Glance side (I'm not sure where exactly we should do the translation).
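
A minimal sketch of that translation, assuming it happens wherever Glance appends the location (the exact spot is the open question above); 'add_location', 'image' and 'new_location' are illustrative stand-ins for whatever Glance has at that point:

import webob.exc
from glance_store import exceptions as store_exc

def add_location(image, new_location):
    try:
        image.locations.append(new_location)  # may raise the new Forbidden
    except store_exc.Forbidden as e:
        # Surface the store-side refusal as a 403 rather than a 500.
        raise webob.exc.HTTPForbidden(explanation=str(e))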

Revision history for this message
Ian Cordasco (icordasc) wrote :

We need to wrap this up soon. Liberty EOL is coming next week and it's not looking likely that we'll merge and release this for Liberty before then.

Revision history for this message
Mike Fedosin (mfedosin) wrote :

Updated version:
* Now we check that malicious user hasn't taken cluster id as his image id (https://bugs.launchpad.net/glance/+bug/1546507/comments/64).
* Added related unit tests.

Revision history for this message
Mike Fedosin (mfedosin) wrote :

Thanks for your comments, folks!

I updated the patch, so please check it asap.

About the 'incurred' case: I propose not to modify the glance code, but to create a script that shows all duplicate locations in the deployment and warns the operator if something is wrong.
It's better because once my code is merged there is no way to set duplicate locations, so Fei Long's patch seems redundant. Also, correct me if I'm wrong, but there are two ways to set links to the same image data in ceph clusters: 'rbd://<image_id>' and 'rbd://<fsid>/pool/image/<image_id>', so 'image_locations_with_url' may return a wrong result and we have to handle that somehow.

Revision history for this message
YAMADA Hideki (yamada-h) wrote :

I'm sorry for the late reply.

I confirmed Sheepdog is vulnerable to this exploit. And Mike's patch #86 works well.

For your information, direct_url of Sheepdog is like this:
'sheepdog://<ip addr>:<port>:<image_id>'

Revision history for this message
Feilong Wang (flwang) wrote :

Re comment #86:

If we (the Glance team) can ultimately provide the script (to check for duplicate locations) along with the release note, I'm OK with that. But if we just assume operators will read the release note, understand the problem correctly, and then write the script correctly themselves, I'm a little bit worried. Let's see the other reviewers' comments.

Revision history for this message
Ian Cordasco (icordasc) wrote :

Okay, so we have Mike's patch in #86, which works for

- Sheepdog

- Ceph

- VMWare

And Tomoki's patch in #83 that works for

- Cinder

We're still waiting to hear back about Swift, yes? The stable maintenance team has agreed to give us a day or two more to fix this and have it backported to Liberty, so we don't end up with a final Liberty release that's still vulnerable to this. Can people please focus on this today?

Revision history for this message
Erno Kuvaja (jokke) wrote :

I'm currently building environments for testing Fei Long's patch from comment #82 and, tbh, I'd much prefer that approach, as I mentioned earlier.

While Mike's patch seems good at the moment (based on Ian's input about testing it), it relies on artificial and unnecessary limitations, and there's a possibility that we have missed some corner case through which it can still be abused.

The only thing I've noticed so far about Fei Long's patch is what Mike mentioned earlier: rbd has two ways to refer to the same data, and that isn't taken into account; it's also currently lacking tests.

Revision history for this message
Feilong Wang (flwang) wrote :

I just improved my patch based on the different location formats of RBD.

And I found a small bug in Mike's patch, in the code below: pieces[3] is 'snap', not the image id. It should be pieces[2].

+ # There are 2 types of 'rbd' urls:
+ # 'rbd://image' and 'rbd://fsid/pool/image/snapshot'
+ # First one is forbidden, second is okay
+ pieces = url[len(prefix):].split('/')
+ if len(pieces) == 4 and pieces[3] == image_id:
+ return True
+ return False

>>> url="rbd://b0849a66-357e-4428-a84c-f5ccd277c076/images/3c9e8443-8331-4415-aa48-55e7ac95b7c0/snap"
>>> prefix="rbd://"
>>> pieces = url[len(prefix):].split('/')
>>> pieces
['b0849a66-357e-4428-a84c-f5ccd277c076', 'images', '3c9e8443-8331-4415-aa48-55e7ac95b7c0', 'snap']
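
For reference, a corrected version of the check might read like this (hypothetical helper name, not the literal patch):

def is_snap_styled_rbd_url_for_image(url, image_id):
    prefix = 'rbd://'
    if not url.startswith(prefix):
        return False
    pieces = url[len(prefix):].split('/')
    # Bare 'rbd://image' URLs stay forbidden; only the snap-styled
    # 'rbd://fsid/pool/image/snapshot' form whose image component
    # (pieces[2], not pieces[3]) matches this image's id is allowed.
    return len(pieces) == 4 and pieces[2] == image_id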

Revision history for this message
Feilong Wang (flwang) wrote :

Based on the discussion on the openstack-dev mailing list, it seems not too many projects have done their liberty-EOL releases yet, so I assume we still have some time to make this happen.

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

The open issues right now:
(0) Do we handle the "incurred" case in code, by script, or both?
(1) Mike/Fei Long's patches QA
(2) Tomoki's patch QA

I don't think we're going to make Liberty-EOL with this. That leaves Liberty installations vulnerable, unless they take the following steps:
(a) run a script to detect duplicate image locations and take appropriate action
(b) use two different glance nodes with separate policy and config files, one exposed to end-users and one exposed *only* to services. At a minimum, the public glance node must disallow the "set_image_location" policy for end-users.

(I need someone to verify that (a) and (b) are sufficient to block this exploit.)

So no matter how we decide on (0), it looks like we also need:
(3) script to detect duplicate image locations

As far as (3) goes, the script will have to detect duplicate locations in an arbitrary format (because an admin may have a workflow that puts the image in the backend first and then sets the image location). It will also have to handle "aliased" ceph locations (the two formats Mike mentions above). Are there any other requirements for this script that I'm missing?
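
One possible shape for such a script (a sketch only: the DSN is hypothetical, the table and column names follow the image_locations table discussed in this thread, and the normalization mirrors the two aliased rbd forms Mike mentioned):

import collections

import sqlalchemy

def normalize(url):
    # Fold snap-styled rbd URLs onto the bare image id so the two
    # aliased ceph forms compare equal.
    if url.startswith('rbd://'):
        pieces = url[len('rbd://'):].split('/')
        if len(pieces) == 4:
            return 'rbd://%s' % pieces[2]
    return url

# Hypothetical DSN; point this at the Glance database.
engine = sqlalchemy.create_engine('mysql://glance:secret@dbhost/glance')
rows = engine.execute(
    "SELECT image_id, value FROM image_locations WHERE deleted = 0")

by_location = collections.defaultdict(list)
for image_id, value in rows:
    by_location[normalize(value)].append(image_id)

for location, images in by_location.items():
    if len(images) > 1:
        print("Duplicate location %s used by images: %s" % (location, images))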

Finally, I think we need to recommend that show_image_direct_url and show_multiple_locations *never* be exposed to end-users; any installation that wants to use these should run multiple Glance nodes as described in (b) above. Right now we say that these settings should be used with care because they are a security risk, but it would be good to suggest what "using with care" should look like.

Revision history for this message
Tony Breeds (o-tony) wrote :

Is it possible to get the 3 patches rolled into one?

From a stable POV, if I've caught up correctly, the proposal is to apply the patches to master and backport them to stable/*, as opposed to the earlier option of having divergent fixes on master and stable/*.

Reading the bug, I don't think I can delay the Liberty EOL long enough to get this into Liberty, but I don't mind doing glance late in the EOL rounds to try to accommodate you.

Revision history for this message
Matthew Oliver (matt-0) wrote :

After a brief look at the glance swift store: the image id is also stored in the URI (swift://<SWIFT_URL>/<CONTAINER>/<FILE>, where the FILE is the image ID). As the store URI starts with 'swift', Mike's patch should catch this and work for the swift store too.

There are multi-tenant and single-tenant swift stores; I see the multi-tenant one as being most at risk, just as it is for cinder, since it seems only the token needs to be supplied.

Swift does support composite tokens (sending glance's service token along with the user's). This would help unless the malicious user is from the same tenant as the image owner. I don't see this option in the store, but this mode would make it most secure and would keep the images away from users' eyes as well.

I haven't tested it, however. I have many a Swift AIO, but nothing set up with glance etc. I could set that up, but looking at the store and Mike's patch it seems pretty obvious.

TL;DR: the Swift store keeps the image id in the URI, so Mike's patch should cover it; the composite token option should be added to glance's swift store (but maybe it already is?).

Revision history for this message
Feilong Wang (flwang) wrote :

Should I remind everyone that we still haven't resolved this?

Revision history for this message
Feilong Wang (flwang) wrote :

I have combined Mike's patch and mine. As for Tomoki's patch, it only changes glance_store code, so it can't be combined with them.

And here are my answers to Brian's questions in #93.

(0) Do we handle the "incurred" case in code, by script, or both?
    Yes, but we can provide the script later.
(1) Mike/Fei Long's patches QA
    I have tested the patch locally and it works for me. Erno said he's testing as well. Feedback?
(2) Tomoki's patch QA
    I reviewed his patch; it's just a check and looks safe to me.

So what I suggest is:

Please help review the two patches again and put your vote in the comments, since we don't have gerrit at hand for this.

Please do help get this fix delivered in Ocata. Thanks.

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Erno, you had mentioned setting up ceph to test the patch. Did you ever get a chance to do that?

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

We need to up the priority on this. It needs to be ready for the non-client library freeze (week of Jan 16). I'll commit to reviewing the patches before Monday Jan 9, but they could use more eyes.

We still need to determine what to do about a DB script for operators.

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Comments on "Fix based on the work of Mike and Feilong" patch
https://bugs.launchpad.net/glance/+bug/1546507/+attachment/4788885/+files/0001-Prevent-setting-locations-to-other-images.patch

I think the basic logic is OK, these are just a few things I noticed.

glance/location.py:
Two questions:
(1) Should we add a log message if we don't delete the backend data,
something like
"Data at %s not being removed as image %s does not appear to own it"
 % (location, image_id)

(2) if we do decide it's ok to delete the data, but len(locs) > 1,
should we provide a list of the locations that are now invalid (along
with the image ids that own those locations)?

glance/api/v1/images.py:
glance/api/v2/images.py:
The exception message isn't quite accurate. Something like
"The location '%s' is forbidden for the configured store"
would be better.

glance/common/utils.py:
validate_external_url():
The NOTE is misleading. The function only checks to make sure the URL
contains the image_id of the image that owns the location being set.
It doesn't actually check to see if the location is in use by another
image.

glance/tests/unit/v1/test_api.py:
test_update_add_direct_rbd_location():
The location is 'rbd://another_image_id' for an image whose id is
'this_image_id', and the test correctly expects to get a 403.
We should also check that when location is 'rbd://this_image_id'
we get a 403, since that's supposed to be a disallowed URL.
(same thing in the v2 version of the test)

test_update_add_snap_styled_rbd_location():
Similar question, except this time, do we need a test to make
sure that a proper snap-style location, that is,
location = 'rbd://fsid/pool/%s/another_image_id' % image_id
returns a 2xx?
(same question about the v2 version)

By the way ... not sure if it makes sense to change this patch, but
someone went through the codebase recently and changed all the
numeric response codes to symbolic constants:
  from six.moves import http_client
and then use, for example, http_client.FORBIDDEN
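
For reference, those constants are just named aliases for the familiar numeric codes:

from six.moves import http_client

assert http_client.FORBIDDEN == 403
assert http_client.OK == 200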

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Was doing some more thinking about the Mike/Fei Long patch.

I'm worried about the image_locations_with_url() function. Isn't
it going to trigger a full scan of the image_locations table for
each delete? That seems like a very expensive operation.

The other thing that occurred to me was that in the swift case,
a malicious user could set the image location to a segment of a
public image. If the segment were deleted, the entire image would
be rendered useless. I guess what I'm getting at is that if this
function only covers some cases, maybe we shouldn't include it,
especially since it seems very expensive.

What do you think?

Revision history for this message
Ian Cordasco (icordasc) wrote :

Brian, you're right that the patch as it's written will trigger full table scans. We're querying on the value column which isn't indexed (and probably can't be efficiently indexed).

Revision history for this message
Ian Cordasco (icordasc) wrote :

We do index on the image id though. We're already checking that the image id that's part of the request and the image id in the location match when we add the URL. Can we use that same validator in there to skip full-table scans unless we determine they're absolutely necessary?
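
A sketch of that suggestion (names hypothetical): run the cheap image-id check first, and only fall back to the expensive scan over the unindexed value column when it cannot decide:

import sqlalchemy

def safe_by_image_id(image_id, url):
    # Cheap path: if the URL embeds this image's id, the add-time
    # validator already guarantees no other image may hold it.
    return image_id in url

def reference_count(engine, url):
    # Expensive path: full scan over the unindexed value column.
    result = engine.execute(
        sqlalchemy.text("SELECT COUNT(*) FROM image_locations"
                        " WHERE value = :url AND deleted = 0"),
        url=url)
    return result.scalar()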

Revision history for this message
Ian Cordasco (icordasc) wrote :

I guess the one thing that check doesn't do is ensure that another image using the same location won't become useless via the same problem.

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Was looking at this again. Fei Long, in your patch, the new code in _do_add_locations() raises exception.Forbidden. The current code raises webob.exc.HTTPForbidden (in the preceding if block, when show_multiple_locations is False). Should your patch be using the webob exception in this spot?

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

This has me worried. In comment #79, Tomoki says:
Cinder store uses an url like 'cinder://<cinder-volume-uuid>' which does not include image-uuid

In Fei Long and Mike's patch, validate_external_url() is going to return False in this case, which would not allow setting any cinder urls -- or am I missing something?

Revision history for this message
Ian Cordasco (icordasc) wrote :

You're not missing anything Brian. You're 100% correct.

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Adding Anton Chevychalov from Mirantis to this bug. He says they have a fix that's been tested in MOS that they want to contribute upstream.

Anton, please read through the bug (at least from comment #85 to the end), and let us know how your fix addresses the concerns about the current Fei Long/Mike Fedosin patch.

Revision history for this message
Mike Fedosin (mfedosin) wrote :

Lol :) Mirantis has my old fix, which is already attached here.

Revision history for this message
Anton Chevychalov (achevychalov) wrote :

Yep. And it was merged on the assumption that it would be merged upstream shortly. Obviously that didn't happen :)

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Quick summary of where we are now:

(A) The situation:

Exploit can occur under *both* of the following conditions:
(1) show_multiple_locations=true (default is *false*)
    ==> deprecated in Newton, supposed to be removed in Ocata
(2) set_image_location policy allows user to set location (default is *admin* since Newton)

Even if (1) is removed in Ocata, a version of this problem will continue, plus that wouldn't help the stable branches anyway.

My understanding is that ceph and cinder are the most likely backends to be configured in this non-default way.

(B) stores and vulnerability assessment

The short story:
http, filesystem not vulnerable
cinder: vulnerable, requires patch, will be broken by requiring image_id in url
rbd, sheepdog, vmware, swift: vulnerable, requiring image_id in url is OK and will fix

We need to determine whether cinder could also require an image_id in the url for new images without breaking (i.e., is it that cinder *cannot* or merely *does not* do this now?). Also, cinder can be configured in a kind of single-tenant mode (like swift); I'm not sure that the proposed patch prevents the exploit when in single-tenant mode.

One other thing about the fix ... it has to take into account all supported stores. Bringing that up because I was thinking that because of the cinder situation, we could maybe let the store being used say what counts as an OK url. But we cannot do that because it's possible for glance to have multiple stores (it can only use the default store for uploads, though). There was a bug Stuart fixed about the "swift+config://" scheme that reminded me of this:
https://bugs.launchpad.net/glance/+bug/1334196
So we have to worry about a case where a deployer uses ceph and then decides to add cinder as the default store -- we still need to check an rbd:// url if a user adds it as a location. (The current Fei Long/Mike patch correctly does this; my point is just that as we're converging on a fix, we need to keep this in mind.)

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Correction to my comment above about the cinder patch: it specifically addresses the single-tenant case.

Revision history for this message
Tomoki Sekiyama (tsekiyama) wrote :

Unfortunately cinder doesn't have a hierarchy like the paths in the filesystem and
swift stores, so that approach does not work meaningfully for cinder...
We could add some scheme that uses volume names in the URL so they can be named
arbitrarily, but that also has a problem, since multiple volumes can have the same name.

Currently, instead of the URL, cinder stores the image_id and the image
owner's project id in the volume metadata named "image_owner", which can be
obtained by "cinder show" or the cinderclient.volumes.get(volume_id) method.
Especially in single-shared-user mode, this must be checked to restrict
manipulation by non-authorized users.

My patch for the cinder store driver attached to this bug adds the check
of the image owner's project id. It gets volume data from cinder based on the URL
"cinder://<uuid>", and allows adding the location only if its "image_owner" metadata
shows the same owner as the image.
# Note that the metadata can only be set or modified by the owner.
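
A hedged sketch of that check (the client handle and parameter names are assumptions about the driver's surroundings, not the attached patch):

from glance_store import exceptions

def check_volume_owner(cinder_client, volume_id, image_owner):
    # 'image_owner' metadata was written when the image volume was
    # created; only the volume owner can set or modify it.
    volume = cinder_client.volumes.get(volume_id)
    if volume.metadata.get('image_owner') != image_owner:
        raise exceptions.Forbidden("Volume %s does not belong to this "
                                   "image's owner" % volume_id)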

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

People: There is no way this is going to make Ocata unless we have some action on it in the next 24 hours (and even then, it's a stretch). I realize that we're working across crazy time zones--Moscow/New York/Wellington/Tokyo--so we really need to communicate here to keep track of what we're working on. But even if we miss Ocata, we need to push on this hard and kill it off.

Fei Long/Anton: please coordinate on a new patch that takes into account comments #100, #101, #105, plus please correct the name of the new policy target in the commit message. Additionally, we're going to have to completely skip the checks for the "cinder://" scheme in the validate_external_url() utility function. I suggest a new block like the one for http, but with a comment saying the cinder check will happen in the backend driver because the cinder url path is not predictable. (Also, please make sure to read my "everyone" comment below.)
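
For concreteness, the cinder carve-out described above might look roughly like this inside validate_external_url() (a simplified sketch, not the attached patch):

def validate_external_url(url, image_id):
    if url.startswith(('http://', 'https://')):
        return True   # http store: treated like the existing http block
    if url.startswith('cinder://'):
        # The cinder url path is not predictable, so ownership is
        # checked in the backend driver instead (see Tomoki's patch).
        return True
    # rbd, sheepdog, vmware and swift URLs all embed the image id.
    return image_id in url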

Tomoki (and people with general glance_store knowledge): I am worried that cinder in single-shared-user mode will be vulnerable to the exploit filed against swift as https://bugs.launchpad.net/glance/+bug/1334196 . This would be the case if multiple backends are configured for a glance installation; if cinder isn't the default, a user could set a "cinder://<volume_id>" url location on an image (having copied the url from a public or shared image) and the non-cinder backend will allow this location to be set. (Feel free to tell me I'm wrong about this.) If so, I think you should add a check to the cinder driver's delete() function to make sure the requestor owns the volume before deleting the volume to protect against this case.

Everyone: I don't like the complete table scan in the Fei Long/Mike patch's image_locations_with_url() function. My understanding is that this is there to deal with the "incurred" case (i.e., someone has used this exploit before the current patch, which will prevent setting "bad" urls, is in place). I think instead of including this function in the code, we should advise operators who may have exposed themselves to this vulnerability to turn on delayed_delete and handle detection of duplicate urls offline. Thoughts?

Thanks, everyone ... let's get cracking on this!

Revision history for this message
Tomoki Sekiyama (tsekiyama) wrote :

Brian,

As far as I've tested, even if a location (x-image-meta-location) is specified in the POST v1/images API, and even if cinder is not the default store, the location is passed to the cinder store driver and the owner is checked with my patch.

Just in case, I have added the owner check to delete() and get() too.
Unit tests are also added in the patch v3.

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Tomoki: Thanks for your quick response!

Revision history for this message
Anton Chevychalov (achevychalov) wrote :

I thought we intended to remove the show_* options in Ocata. Was I wrong?

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Anton: removing show_multiple_locations has been delayed (it's the subject of an OSSN that's about to come out, which notes that if show_multiple_locations=False there's nothing to worry about, so we didn't want to remove the option now and complicate the situation).

The show_image_direct_url option is not scheduled for removal AFAIK.

Revision history for this message
Anton Chevychalov (achevychalov) wrote :

Brian, I see no way for me to deliver a result today. I need to improve my understanding of these patches and of how Glance is currently used in our product. Besides, this is not my only task for today. I'm sorry, but I think we should not make such patches in a hurry.

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Anton, I respect your position--you have joined us only recently on this, and it looks like Fei Long is not actively working on the patch right now. Even though we'll miss RC-1, let's keep working on this. It would be good to have a fix that we could backport to the stable branches.

Revision history for this message
Feilong Wang (flwang) wrote :

Sorry for the late response here; I'm going to pick this up again. And I'm curious whether gerrit can do a private review, because patch review on launchpad is really annoying.

Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

Unfortunately gerrit doesn't support private reviews, so patch review for embargoed bugs needs to happen on launchpad.

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

@Fei-Long: thanks for picking this back up. I'll keep a close watch and will review any new patches as soon as possible. Let's aim to finish this by R-17 (it's R-20 now; R-17 is the week before the summit/forum).

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Fei Long, Erno, Nikhil, and I were able to discuss this issue at the Boston Summit/Forum and decided that we need to implement reference counting in order to have a robust solution. Since that is going to have a serious impact on the code, we agreed that we need to have a spec explaining how the fix will work. Since this bug is still private, the spec is attached here as a patch instead of being submitted to gerrit for review.

Please read through as soon as you can and leave comments here. I think I addressed all the issues, but I may have left something out. Or possibly this is over-complicating things. In any case, let me know what you think. Thanks!

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

I've been thinking about Cinder. I think the scheme proposed in the spec is vulnerable to the following exploit when Cinder is being used in multi-tenant mode:

1 Tenant A creates Image 1234 with location cinder://whatever (and Tenant A owns this data)
2 Tenant B creates Image 2222 with location cinder://whatever
3 Tenant A deletes Image 1234. Glance doesn't try to delete cinder://whatever because the location_count is non-zero
4 (Tenant A is now paying for storage of image data he's "deleted", but we'll ignore that.)
5 Tenant B deletes Image 2222. The location_counter for cinder://whatever is now zero, so Glance tries to delete cinder://whatever. This will fail because Tenant B does not own cinder://whatever, so cinder will not allow the volume to be deleted.

As a result, the scheme in the spec is vulnerable to leaving around a lot of "dark data".

My conclusion is that I'm beginning to think that we've gone overboard with this reference counting business, and that we should simply disallow multiple images using the same location. The "legitimate scenarios" for multiple Glance image records referring to the same location revolve around the autoscale case, and nowadays autoscale is falling out of favor and people are using containers instead.

I think we should simplify the proposal. We'll still need to normalize the location URIs for some of the backends, and we'll need to do the hashing to allow for quick lookups/uniqueness on the location, but maybe we can do that by adding an indexed column to the 'image_locations' table. Maybe we can get away with not having the to_be_deleted table, too?
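
For the indexed-lookup part, the idea might be as simple as storing a digest of the normalized URI in a new, uniquely indexed column (the normalization here is illustrative only, not a proposal for the actual rules):

import hashlib

def location_digest(url):
    # Normalize trivially, then hash so the column is fixed-width and
    # cheap to index; a UNIQUE constraint on it would enforce one
    # image record per location.
    normalized = url.strip().rstrip('/')
    return hashlib.sha256(normalized.encode('utf-8')).hexdigest()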

Revision history for this message
John Garbutt (johngarbutt) wrote :

Wondering if we can use the new service token system to secure this quickly. Only allow requests with a Cinder or Nova service token attached to create images with RBD locations, due to the implied delete permission. In a similar way, we could hide the location URL in all list/show image responses that don't carry the service token.

I think it's good to also not allow duplicate image locations. Snapshots are cheap in ceph, as I understand it, so the system should always create another snapshot. But that seems less of a security thing, more a robustness thing.

Now I am sure there are better fixes, but that service token policy fix seems a quick-ish fix to me? I could be wrong.

PS
Why were users wanting to create snapshots from ceph urls? That's really an admin-only thing, isn't it?

PPS
Doesn't your spec fail to protect against people guessing / knowing a random user's disk and just deleting it? I might have misread it.

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Thanks, John.

Everyone: let's consider John's suggestion carefully. I was going to hold off on the service token stuff for later in the interest of getting a fix to this bug out, but obviously the proposal on the spec is so freaking complicated that complexity of the fix is no longer an issue.

@John: your PS: the problem is that small deployments with only one Glance have to expose the ability to set locations, so even though it should be admin-only, it's open to users. Your service-token proposal would address this.

@John: your PPS: the original user's image would keep the reference count for the data at > 0, so even if someone guessed/learned the location, set it on an image, and deleted the image, Glance wouldn't try to delete the data out of the backend. On the other hand, this is a data leakage scenario that I don't like, and I think we should prevent it. So that's another reason to prohibit multiple image records from containing the same location.

Revision history for this message
Feilong Wang (flwang) wrote :

One thing I have wanted to do for a long time is to save the image owner in the location's metadata, so that only the owner has permission to delete the image (data). Does that make sense?

Revision history for this message
Feilong Wang (flwang) wrote :

@John, re the 'service token' idea: does that mean we need additional work in Nova and Cinder, or do they already support this? Thanks.

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

@Fei Long: if we put the image owner in the location metadata, then we'll need to have some concept of "system location metadata" so that a user can't modify the owner. I think I'd prefer to make image locations unique so that only one image record can have a particular location. Then we wouldn't need the owner metadata, because the image owner would also own that location.

Key question: is there a solid use-case requiring multiple images to refer to the same image data location? (There are a few mentioned in the spec, but I don't see them as being use cases that absolutely *must* be satisfied; I think we would be OK disallowing them.)

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Note: I just subscribed Hemanth to this bug. He was subscribed earlier as a member of glance coresec, so this doesn't open up the bug to a wider audience in any way. This is on the off chance he has time to look over the spec. He's the expert on zero-downtime-database migrations, so it would be nice to have his input on the database changes.

Revision history for this message
Erno Kuvaja (jokke) wrote :

So the problem with trying to solve this by just preventing creation is that it fixes the issue only partially. All the images already in this situation (people having created an image based on a public image's location in order to have their own metadata associated with it) are still vulnerable as soon as any of the images sharing the same location is deleted. In other words, the creation-time fix is not a solution for the existing cases, whereas reference counting would be.

Now, if I have understood correctly, one cannot remove a snapshotted image from rbd backends, which doesn't make this bug disappear but does make it a bit less hazardous. I will try to test that next week.

Revision history for this message
Tristan Cacqueray (tristan-cacqueray) wrote :

I've subscribed the OSSN group to discuss a potential Security Note regarding this bug and bug 1555590, which are both affected by the show_multiple_locations feature when it is available to regular users. This is similar to OSSN-0065, but it seems like a new Note titled "Show multiple locations shouldn't be exposed to regular users" (or something along those lines) would be better suited.

Revision history for this message
Brian Rosmaita (brian-rosmaita) wrote :

Some points of information:

You can see that the show_multiple_locations option is described as a security risk in both the configuration file and in the API reference guide in all stable branches:
https://github.com/openstack/glance/blob/stable/newton/etc/glance-api.conf#L305-L308
https://github.com/openstack/glance/blob/stable/newton/api-ref/source/v2/images-parameters.yaml#L373-L385

The option was scheduled to be removed in Ocata, but we decided to leave it in because it's mentioned in OSSN-0065 as a way to be sure you're not subject to the vulnerability:
https://github.com/openstack/glance/commit/bd5a23df095af9f2d5b21f3350674fb6e36abbe5

Revision history for this message
Matthew Oliver (matt-0) wrote :

Sorry, I've been rather distracted: I was involved in the Rackspace layoffs last year, but am now on board with SUSE, and this somehow fell through the cracks. Where is this at? It's now 2018.

As for the earlier comment about Swift SLO segments: the swift store saves segments as

  "%s-%05d" % (location.obj, chunk_id)

where location.obj is the image_id. So the segments also contain the image_id, and from my understanding of the patch, these segments also can't be used.
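
A quick illustration with a hypothetical image id (so a malicious image whose own id differs could not claim the segment under the patch's image-id check):

>>> image_id = '3c9e8443-8331-4415-aa48-55e7ac95b7c0'
>>> "%s-%05d" % (image_id, 1)
'3c9e8443-8331-4415-aa48-55e7ac95b7c0-00001'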

Maybe something to talk about at the PTG?

Revision history for this message
Jeremy Stanley (fungi) wrote :

Sorry about that. The OpenStack VMT has considered it closed from our perspective since Tristan switched the security advisory task to Opinion over two years ago (comment #39). It looks like the OSSN editors probably never noticed Tristan's suggestion nearly a year ago (comment #133) either, so I've subscribed the ossg-coresec reviewers as well.

I recommend we switch this report to a normal Public bug this Friday, August 10, unless there are objections in the meantime. As a community we shouldn't be keeping bug reports private for more than a month (my opinion), so 2.5 years is beyond excessive.

Revision history for this message
Erno Kuvaja (jokke) wrote :

Thanks Jeremy,

I totally agree. I'm not even sure to what extent this is still an issue/exploitable. Perhaps at this stage it would be good to open this up and get the work going to fix whatever still needs to be fixed.

Revision history for this message
Jeremy Stanley (fungi) wrote :

Seeing no objections, I'm switching this to Public status (long overdue!).

information type: Private Security → Public
tags: added: security
Changed in ossa:
importance: Critical → Undecided
Revision history for this message
Matthew Oliver (matt-0) wrote :

Great thanks Jeremy.

So that's a good question: what is the state of this bug? There were some patches added to the bug while it was embargoed, but then there was discussion about the reference-counting approach. So what's the plan here?

Can we confirm this is still an issue? Glance devs?

Jeremy Stanley (fungi)
description: updated