[RFE] Network cascade deletion API call

Bug #1870319 reported by Slawek Kaplonski
Affects: neutron
Importance: Wishlist
Assigned to: Slawek Kaplonski

Bug Description

This originally came from the Kuryr team and was already discussed at the PTG in Shanghai.

Use case:
As a cloud user, I would like an API command similar to openstack loadbalancer delete --cascade LB_ID: triggering the deletion of a network (openstack network delete --cascade NET_ID) would automatically delete the associated ports/subports and subnets.

Currently, what Kuryr needs to do to delete a "namespace" is:

* Get the ACTIVE ports in the subnet
* Get the trunks
* For each of the ACTIVE ports, obtain which trunk it is attached to and call Neutron to detach it
* Remove the ports (one by one, there is no bulk deletion)
* Get the DOWN ports in the given subnet
* Remove the ports (one by one, there is no bulk deletion)
* Detach the subnet from the router
* Remove the network (which will also remove the subnet)
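The manual cascade above can be sketched in Python against a small in-memory stand-in for a Neutron client. The FakeNeutron class and its method names are illustrative assumptions, not the real openstacksdk API; the point is the ordering of the calls Kuryr must make today.

```python
# Sketch of the cascade cleanup Kuryr performs today, written against a
# minimal hypothetical client interface (NOT the real openstacksdk API).

class FakeNeutron:
    """In-memory stand-in for a Neutron client, for illustration only."""

    def __init__(self, ports, trunks, router_subnets):
        # ports: {port_id: {"status": ..., "subnet_id": ...}}
        self.ports = ports
        # trunks: {trunk_id: set of subport ids}
        self.trunks = trunks
        self.router_subnets = set(router_subnets)
        self.network_deleted = False

    def list_ports(self, subnet_id, status):
        return [p for p, d in self.ports.items()
                if d["subnet_id"] == subnet_id and d["status"] == status]

    def trunk_of(self, port_id):
        for trunk_id, subports in self.trunks.items():
            if port_id in subports:
                return trunk_id
        return None

    def detach_subport(self, trunk_id, port_id):
        self.trunks[trunk_id].discard(port_id)

    def delete_port(self, port_id):
        del self.ports[port_id]

    def detach_subnet_from_router(self, subnet_id):
        self.router_subnets.discard(subnet_id)

    def delete_network(self):
        # Deleting the network also removes its subnets.
        self.network_deleted = True


def delete_namespace(client, subnet_id):
    """Replay the manual cascade: detach trunk subports, delete ports
    one by one (there is no bulk deletion), detach the subnet from the
    router, then remove the network."""
    for port_id in client.list_ports(subnet_id, "ACTIVE"):
        trunk_id = client.trunk_of(port_id)
        if trunk_id is not None:
            client.detach_subport(trunk_id, port_id)
        client.delete_port(port_id)
    for port_id in client.list_ports(subnet_id, "DOWN"):
        client.delete_port(port_id)
    client.detach_subnet_from_router(subnet_id)
    client.delete_network()
```

Even in this compressed form, the caller issues one request per port; a cascade network deletion would collapse all of it into a single API call.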

Revision history for this message
YAMAMOTO Takashi (yamamoto) wrote :

can this be a separate service plugin like get-me-a-network thing?

Slawek Kaplonski (slaweq) wrote :

I think this can be implemented as a separate service plugin.

Slawek Kaplonski (slaweq) wrote :

We discussed this at the last drivers meeting. Here is a summary of the discussion so far:

* amotoki asked whether implementing bulk port deletion would solve the main problem Kuryr has today; it may help if bulk deletion is also "cascade" and removes all trunk subports as well,

* if we go with bulk port deletion including cascade subport deletion, we would probably be close to implementing cascade network deletion as well,

* open question: what should the response be in case of a partial failure?

Nate Johnston (nate-johnston) wrote :

Proposed plan here:

1. Add an option to the existing single-port deletion behavior that also removes the port from a trunk if it is a trunk subport.

2. Implement bulk object deletion, starting with port deletion.

If these steps work, then all that will be left from your earlier list is the initial call to list the existing ports and, at the end, detaching the subnet from the router and removing the network.
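With both pieces of this plan in place, the per-port loop collapses into a handful of calls. A minimal sketch, where bulk_delete_ports() and its detach_trunk flag are assumptions describing the API under discussion, not an existing Neutron feature:

```python
# Hypothetical flow once both proposed pieces land. bulk_delete_ports()
# and its detach_trunk flag are assumptions, not an existing Neutron API.

class FakeClient:
    """Tiny in-memory stand-in used only to illustrate the flow."""

    def __init__(self, ports):
        self.ports = dict(ports)        # port_id -> subnet_id
        self.subnet_attached = True
        self.network_deleted = False

    def list_ports(self, subnet_id):
        return [p for p, s in self.ports.items() if s == subnet_id]

    def bulk_delete_ports(self, port_ids, detach_trunk=False):
        # Steps 1 + 2 of the plan: detach trunk subports (implied by the
        # flag) and delete all listed ports in a single request.
        for port_id in port_ids:
            self.ports.pop(port_id, None)

    def detach_subnet_from_router(self, subnet_id):
        self.subnet_attached = False

    def delete_network(self):
        self.network_deleted = True


def delete_namespace_simplified(client, subnet_id):
    """Four calls instead of one request per port."""
    client.bulk_delete_ports(client.list_ports(subnet_id),
                             detach_trunk=True)
    client.detach_subnet_from_router(subnet_id)
    client.delete_network()
```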

Slawek Kaplonski (slaweq) wrote :

Getting back to this. I did some research on return codes for such cases and, unfortunately, it seems there are no strict guidelines.
I checked how, for example, Octavia's cascade load balancer deletion works: it always returns 204, but its API is asynchronous, so the user needs to check periodically whether everything is gone.

AFAIU the Neutron API is synchronous where DB resources are concerned, so it would need to return something only after the network is actually deleted. I think the best solution would be to return a code reflecting the success or failure of deleting the network resource, as that is what was requested. So if, e.g., some ports were deleted but others were not, and the network ultimately wasn't deleted because of those leftover ports, I think we should return 409 Conflict, as the network couldn't really be deleted.
If you have other ideas, I'm open to proposals :)
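The rule proposed here can be written down as a small mapping from the outcome of the cascade to an HTTP status code. Only 204 and 409 are actually discussed in this thread; the 500 fallback below is an added assumption for illustration:

```python
# Sketch of the proposed status-code rule for a synchronous cascade
# delete: the response reflects the fate of the network itself (the
# resource the user asked to delete), not of every child resource.
# The 500 fallback is an assumption; only 204 and 409 come from the
# discussion above.

def cascade_delete_status(network_deleted, leftover_port_ids):
    """Map the outcome of a cascade network deletion to an HTTP code."""
    if network_deleted:
        return 204  # No Content: the network (and its children) are gone
    if leftover_port_ids:
        return 409  # Conflict: undeletable ports blocked the network
    return 500      # some other failure deleting the network itself
```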

Slawek Kaplonski (slaweq) wrote :

We discussed this RFE again at today's drivers meeting (https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers.2021-09-10-14.05.log.html#l-124) and decided to approve it.
I will now work on the spec, where I will propose the details of the new API for bulk port deletion and cascade network deletion.

Changed in neutron:
status: Triaged → Confirmed
tags: added: rfe-approved
removed: rfe-triaged
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron-specs (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/neutron-specs/+/810822
