Comment 114 for bug 1546507

Brian Rosmaita (brian-rosmaita) wrote:

People: There is no way this is going to make Ocata unless we see some action on it in the next 24 hours (and even then, it's a stretch). I realize we're working across crazy time zones (Moscow/New York/Wellington/Tokyo), so we really need to communicate here to keep track of what we're working on. But even if we miss Ocata, we need to push on this hard and kill it off.

Fei Long/Anton: please coordinate on a new patch that takes comments #100, #101, and #105 into account, and please also correct the name of the new policy target in the commit message. Additionally, we're going to have to skip the checks for the "cinder://" scheme entirely in the validate_external_url() utility function. I suggest a new block like the one for http, but with a comment saying the cinder check will happen in the backend driver, because the cinder URL path is not predictable; I've sketched what I mean below. (Also, please make sure to read my "Everyone" comment below.)
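To make that concrete, here's a minimal sketch of the kind of block I have in mind. The overall shape of validate_external_url() below is my guess at the patch under review, not a copy of it; only the cinder branch is the actual suggestion:

    import urllib.parse

    def validate_external_url(url):
        """Reject location URLs that could be abused against glance backends."""
        scheme = urllib.parse.urlparse(url).scheme

        if scheme in ('http', 'https'):
            # ... existing checks for the http case go here ...
            return True

        if scheme == 'cinder':
            # Skip validation entirely: a cinder URL's path
            # ("cinder://<volume_id>") is not predictable, so the real
            # ownership check has to happen in the cinder backend driver.
            return True

        # ... checks for any remaining schemes ...
        return True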

Tomoki (and people with general glance_store knowledge): I am worried that cinder in single-shared-user mode is vulnerable to the exploit filed against swift as https://bugs.launchpad.net/glance/+bug/1334196. This would be the case if multiple backends are configured for a glance installation: if cinder isn't the default store, a user could set a "cinder://<volume_id>" URL location on an image (having copied the URL from a public or shared image), and the non-cinder backend would allow that location to be set. (Feel free to tell me I'm wrong about this.) If I'm right, I think you should add a check to the cinder driver's delete() function to make sure the requestor owns the volume before deleting it, to protect against this case.
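If I'm right about the exposure, the check could look roughly like the sketch below. Note the hedges: get_cinderclient(), location.store_location.volume_id, context.tenant, the 'os-vol-tenant-attr:tenant_id' attribute, and exceptions.Forbidden are my best guesses at the names in glance_store and cinderclient, so treat them as placeholders to verify against the real driver:

    from glance_store import exceptions

    def delete(self, location, context=None):
        """Delete the volume backing an image, but only for its owner."""
        volume_id = location.store_location.volume_id
        client = get_cinderclient(self.conf, context)
        volume = client.volumes.get(volume_id)

        # In single-shared-user mode glance talks to cinder with one
        # service account, so cinder itself won't refuse cross-tenant
        # access; compare the volume's owning project to the requestor's
        # project explicitly before destroying anything.
        owner = getattr(volume, 'os-vol-tenant-attr:tenant_id', None)
        if owner is None or owner != context.tenant:
            raise exceptions.Forbidden("requestor does not own volume %s"
                                       % volume_id)

        client.volumes.delete(volume_id)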

Everyone: I don't like the complete table scan in the Fei Long/Mike patch's image_locations_with_url() function. My understanding is that it exists to handle the "incurred" case, i.e., someone used this exploit before the current patch (which prevents setting "bad" URLs) was in place. Instead of including this function in the code, I think we should advise operators who may have exposed themselves to this vulnerability to turn on delayed_delete and handle detection of duplicate URLs offline; a rough sketch of such an offline check is below. Thoughts?
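For the offline detection, something along these lines run against a database replica might be enough. The image_locations table and its value/image_id/deleted columns match my reading of the glance schema, but verify that before trusting the output, and the connection string is a placeholder:

    import sqlalchemy as sa

    engine = sa.create_engine('mysql+pymysql://glance:***@dbhost/glance')

    # Any location URL attached to more than one image is a candidate
    # "incurred" case; with delayed_delete turned on, the scrubber won't
    # have destroyed the backing data yet, so operators can sort these
    # out by hand before anything is actually removed.
    query = sa.text("""
        SELECT value, COUNT(DISTINCT image_id) AS image_count
          FROM image_locations
         WHERE deleted = 0
         GROUP BY value
        HAVING COUNT(DISTINCT image_id) > 1
    """)

    with engine.connect() as conn:
        for row in conn.execute(query):
            print(row.value, row.image_count)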

Thanks, everyone ... let's get cracking on this!