Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

Bug #967832 reported by Lars Erik Pedersen on 2012-03-28
This bug affects 20 people
Affects Status Importance Assigned to Milestone
Glance
Undecided
Unassigned
OpenStack Compute (nova)
Medium
Ryan Hallisey
OpenStack Dashboard (Horizon)
High
Unassigned
OpenStack Identity (keystone)
High
Dolph Mathews

Bug Description

If you have running instances in a tenant, then remove all of its users, and finally delete the tenant, the instances keep running. This causes serious trouble, since nobody has access to delete them. It also affects the "Instances" page in Horizon, which breaks if this scenario occurs.

description: updated
description: updated
Gabriel Hurley (gabriel-hurley) wrote :

This is actually tracked in the blueprint tenant-deletion (https://blueprints.launchpad.net/horizon/+spec/tenant-deletion), but I'll leave the bug as a reminder.

Changed in horizon:
importance: Undecided → Medium
status: New → Confirmed
importance: Medium → High
Joseph Heck (heckj) wrote :

Needs some discussion about how resources should be treated from a UX point of view at the Folsom design summit. My inclination would be to destroy all resources owned by a tenant when it's being removed, but that may be too harsh, and doesn't take into account future states where trust may be delegated.

Regardless, that also implies an inversion of control where Keystone would potentially need to reach into other systems with administrative access to impact resources, which I'm not sure I like.

tags: added: blueprint
Changed in keystone:
status: New → Confirmed
importance: Undecided → High
Jay Pipes (jaypipes) wrote :

For the record, I agree that the action should be to terminate and delete all resources attached to a tenant when the tenant is deleted...

Tom Fifield (fifieldt) on 2012-06-07
Changed in nova:
status: New → Confirmed
Thierry Carrez (ttx) on 2012-06-07
Changed in nova:
importance: Undecided → High
Michael Still (mikal) on 2012-08-10
tags: added: ops
Matt Joyce (matt-nycresistor) wrote :

I agree that, when a tenant is deleted, the running instances associated with it should be terminated.

Tong Li (litong01) wrote :

An easier and non-destructive approach to this problem may be to check the resources (such as images, instances, and volumes) under the tenant and make sure nothing would be left hanging around, inaccessible to anyone, once the tenant is deleted. If any such resource is detected, the tenant delete should not proceed and should tell the user that the tenant cannot be deleted because of those resources. This also forces the operator to handle the resources first. In many cases, this approach might be the safer way to deal with deletion.
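The guard-check approach above can be sketched as follows. This is illustrative only: `delete_tenant`, `TenantInUseError`, and the `resource_registry` mapping are hypothetical stand-ins, not real keystone APIs; a real implementation would query each service's API for resources owned by the tenant.

```python
class TenantInUseError(Exception):
    """Raised when a tenant still owns resources and so cannot be deleted."""


def delete_tenant(tenant_id, resource_registry):
    """Refuse deletion while any service still holds resources for the tenant.

    resource_registry maps a service name (e.g. "nova") to a dict of
    tenant_id -> list of resource ids owned by that tenant.
    """
    leftovers = {service: owned[tenant_id]
                 for service, owned in resource_registry.items()
                 if owned.get(tenant_id)}
    if leftovers:
        # Tell the operator which services still hold resources, so they
        # can clean those up first.
        raise TenantInUseError(
            "Tenant %s still owns resources in: %s"
            % (tenant_id, sorted(leftovers)))
    # Safe to proceed with the actual keystone delete here.
    return True
```

Usage: with `{"nova": {"t1": ["vm-1"]}, "glance": {}}` as the registry, deleting `t1` raises `TenantInUseError`, while deleting an unknown tenant succeeds.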

Stephan Fabel (sfabel) wrote :

Regarding comment #5, for what it's worth, I would like a choice: let me delete all resources, but confirm with me first.

I would like a verification step here: an "are you sure" prompt, along with an optional password or other validated check, that prevents the last user from being deleted while resources are still active. However, if the override is approved, then the instances should be blown away along with the last user.

Gabriel Hurley (gabriel-hurley) wrote :

Closing this as a bug as it is superseded by the blueprint https://blueprints.launchpad.net/horizon/+spec/tenant-deletion

Changed in horizon:
status: Confirmed → Won't Fix
Joe Gordon (jogo) wrote :

This won't be fixed in Nova in Grizzly; it is a cross-project issue that needs to be hashed out at the Summit.

Changed in nova:
importance: High → Medium
Adam Young (ayoung) on 2013-03-07
Changed in keystone:
milestone: none → havana-1
yelu (yeluaiesec) on 2013-03-26
Changed in keystone:
assignee: nobody → yelu (yeluaiesec)
Joshua Harlow (harlowja) wrote :

Wouldn't a good way be to have keystone broadcast user/tenant/role creation/deletion events and let downstream systems consume those messages? Then Nova can react however it wants, Glance too, and so can any other downstream system that wants to take some kind of action when users/tenants/roles are deleted or added.
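The broadcast idea above can be sketched with a minimal in-process publish/subscribe bus. This is a toy model under stated assumptions: the `EventBus` class, handler names, and the `identity.project.deleted` event string are illustrative; a real deployment would publish over oslo.messaging and each service would register its own listener.

```python
class EventBus:
    """Toy stand-in for a message bus carrying keystone notifications."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, handler):
        # Each downstream service registers a handler for events.
        self._subscribers.append(handler)

    def publish(self, event_type, payload):
        # Keystone would publish here; every subscriber decides how to react.
        for handler in self._subscribers:
            handler(event_type, payload)


cleaned_up = []

def nova_handler(event_type, payload):
    # Nova would terminate instances owned by the deleted project here.
    if event_type == "identity.project.deleted":
        cleaned_up.append(("nova", payload["project_id"]))

def glance_handler(event_type, payload):
    # Glance would remove images and image members for the project here.
    if event_type == "identity.project.deleted":
        cleaned_up.append(("glance", payload["project_id"]))


bus = EventBus()
bus.subscribe(nova_handler)
bus.subscribe(glance_handler)
bus.publish("identity.project.deleted", {"project_id": "t1"})
```

The design point is the inversion Joseph Heck mentions earlier: keystone never reaches into other services; it only announces the deletion, and each service owns its own cleanup policy.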

yelu (yeluaiesec) on 2013-04-09
Changed in keystone:
assignee: yelu (yeluaiesec) → nobody
Thierry Carrez (ttx) on 2013-05-14
no longer affects: nova/havana
Thierry Carrez (ttx) wrote :

<dolphm> that's being worked in bp notifications
<dolphm> https://blueprints.launchpad.net/keystone/+spec/notifications
<ttx> so for havana-2 ?
<dolphm> which is blocked by https://blueprints.launchpad.net/keystone/+spec/unified-logging-in-keystone ... which we just added to m1 and is nearly complete
<dolphm> ttx: yes, looks to be on pace for m2

Changed in keystone:
milestone: havana-1 → havana-2
Adam Young (ayoung) wrote :

The RPC mechanism assumes eventlet, but many people run Keystone in HTTPD without eventlet. Could we do the notification mechanism using straight AMQP and punt on ZeroMQ for a first implementation? We only need to publish notifications, not accept them, so Keystone does not need the full RPC stack.

Dolph Mathews (dolph) wrote :

Removed m2 target because bp notifications has some blockers; it looks like it won't land during Havana at this point.

Changed in keystone:
milestone: havana-2 → none
Adam Young (ayoung) on 2013-07-31
Changed in keystone:
assignee: nobody → Victoria Martínez de la Cruz (vkmc)
Changed in keystone:
assignee: Victoria Martínez de la Cruz (vkmc) → nobody
Dolph Mathews (dolph) on 2013-08-29
Changed in keystone:
assignee: nobody → Dolph Mathews (dolph)
milestone: none → havana-3
Dolph Mathews (dolph) wrote :

Keystone now emits notifications when projects/tenants are deleted, as part of https://blueprints.launchpad.net/keystone/+spec/notifications

Changed in keystone:
status: Confirmed → Fix Committed
Thierry Carrez (ttx) on 2013-09-05
Changed in keystone:
status: Fix Committed → Fix Released
Dolph Mathews (dolph) wrote :
summary: - Instances are still running when a tenant are deleted
+ Resources owned by a project/tenant are not cleaned up after that
+ project is deleted from keystone
Iccha Sethi (iccha-sethi) wrote :

I feel the image-member cleanup is the responsibility of something external that performs a clean tenant deletion, such as Horizon (https://blueprints.launchpad.net/horizon/+spec/tenant-deletion).

Thierry Carrez (ttx) on 2013-10-17
Changed in keystone:
milestone: havana-3 → 2013.2
Mark Washenberger (markwash) wrote :

I don't think it makes sense to look at this as a glance issue.

I agree some work may be needed in glance as in other projects to respond to tenant deletion events in a sensible way. But glance shouldn't be innovating that independently. Sounds like a cross-cutting-concern for all projects.

Changed in glance:
status: New → Won't Fix
Assaf Muller (amuller) on 2014-01-01
Changed in neutron:
status: New → Confirmed
Assaf Muller (amuller) on 2014-03-03
Changed in neutron:
assignee: nobody → Assaf Muller (amuller)
Dolph Mathews (dolph) wrote :

This was brought up again (specifically for nova) in bug https://bugs.launchpad.net/nova/+bug/1288230

Fix proposed to branch: master
Review: https://review.openstack.org/92599

Changed in neutron:
status: Confirmed → In Progress

Fix proposed to branch: master
Review: https://review.openstack.org/92600

Not just the instances, but likely the subnet, router, gateway, etc. Once the tenant is gone it is very difficult to get the IDs you need. If you do a quantum (nova) port-list you will likely see lots of floating and fixed IPs related to it as well.


Ryan Hallisey (rthall14) on 2014-08-18
Changed in nova:
assignee: nobody → Ryan Hallisey (rthall14)
Assaf Muller (amuller) wrote :

Ryan, I *just* resumed working on this issue today (in Neutron). I think we should have a summit design session about an OpenStack-wide solution. I hope you're coming to Paris :)

Ryan Hallisey (rthall14) wrote :

Hey Assaf,
I also picked this up again only about a week ago. It would be awesome if we could do a summit design session about this! I submitted a presentation related to OpenStack and SELinux, so hopefully it will be accepted and I will be headed to Paris!

Fix proposed to branch: master
Review: https://review.openstack.org/115965

Fix proposed to branch: master
Review: https://review.openstack.org/115966

Change abandoned by Salvatore Orlando (<email address hidden>) on branch: master
Review: https://review.openstack.org/92600
Reason: This patch has been inactive long enough that I think it's safe to abandon.
The author can resurrect it if needed.

Change abandoned by Salvatore Orlando (<email address hidden>) on branch: master
Review: https://review.openstack.org/115964
Reason: This patch has been inactive long enough that I think it's safe to abandon.
The author can resurrect it if needed.

Change abandoned by Salvatore Orlando (<email address hidden>) on branch: master
Review: https://review.openstack.org/115965
Reason: This patch has been inactive long enough that I think it's safe to abandon.
The author can resurrect it if needed.

Change abandoned by Salvatore Orlando (<email address hidden>) on branch: master
Review: https://review.openstack.org/115966
Reason: This patch has been inactive long enough that I think it's safe to abandon.
The author can resurrect it if needed.

Change abandoned by Assaf Muller (<email address hidden>) on branch: master
Review: https://review.openstack.org/92599

Matt Riedemann (mriedem) wrote :

Did anything happen at the Kilo summit in Paris about this? Are there any mailing list threads on it? If it hasn't come to a summit yet, we should talk about it in the cross-project sessions in Vancouver.

A simple approach without listening for event notifications (which would be the slickest way) would be a periodic task that checks each resource's tenant to see whether the tenant still exists and, if not, reaps the resource (like we have for orphaned instances).
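The periodic-reaper approach above can be sketched as follows. This is a minimal sketch, assuming hypothetical names: `reap_orphaned_resources`, the in-memory resource list, and `valid_tenants` stand in for real keystone project lookups and a real service's delete calls.

```python
def reap_orphaned_resources(resources, valid_tenants):
    """Remove resources whose owning tenant no longer exists in keystone.

    resources: list of dicts, each with a "tenant_id" key.
    valid_tenants: set of tenant ids that still exist.
    Returns the list of reaped (orphaned) resources.
    """
    reaped = [r for r in resources if r["tenant_id"] not in valid_tenants]
    # Keep only resources whose tenant still exists; a real service would
    # call its own delete API for each reaped resource instead.
    resources[:] = [r for r in resources if r["tenant_id"] in valid_tenants]
    return reaped
```

A periodic task would call this on each pass, after fetching the current set of project IDs from keystone. The trade-off versus notifications is lag and extra keystone load, but no missed events: a resource created after its tenant's deletion event fired still gets reaped on the next pass.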

Dolph Mathews (dolph) wrote :

I agree with Mark's assertion in comment #17 that Glance shouldn't be going about a solution alone, but this certainly affects Glance.

Ian Cordasco (icordasc) wrote :

This also certainly affects Horizon.

Changed in glance:
status: Won't Fix → Confirmed
Assaf Muller (amuller) on 2015-11-20
Changed in neutron:
assignee: Assaf Muller (amuller) → nobody
status: In Progress → Confirmed
no longer affects: neutron
Sean Dague (sdague) wrote :

The Tokyo Summit resolution here was that this should be done via an osc (OpenStackClient) plugin. There are really dramatic issues with automatically deleting resources on keystone deletes: many sites need an archive process, and Nova itself soft-deletes many resources and even has the ability to set an undo window on some of them.

This shouldn't be an automatic process in the cloud; it should be deliberate, just as removing a user from /etc/passwd does not delete all the files on your system owned by that user.

Changed in nova:
status: Confirmed → Won't Fix