[RFE] Network cascade deletion API call

Bug #1870319 reported by Slawek Kaplonski

Bug Description

This RFE originally came from the Kuryr team and was already discussed at the PTG in Shanghai.

Use case:
As a cloud user I would like an API command similar to openstack loadbalancer delete --cascade LB_ID: I trigger the deletion of a network (openstack network delete --cascade NET_ID) and it automatically deletes the associated ports/subports and subnets.

Currently what kuryr needs to do to delete "namespace" is:

* Get the ACTIVE ports in the subnet
* Get the trunks
* For each ACTIVE port, obtain which trunk it is attached to and call Neutron to detach it
* Remove the ports (one by one, there is no bulk deletion)
* Get the DOWN ports in the given subnet
* Remove the ports (one by one, there is no bulk deletion)
* Detach the subnet from the router
* Remove the network (which will also remove the subnet)
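The steps above can be sketched as a single function. This is a rough illustration in openstacksdk-style calls, not Kuryr's actual code; the `net` client object and the exact filter syntax are assumptions:

```python
def cleanup_namespace_network(net, network_id, subnet_id, router_id):
    """Manual cascade deletion, mirroring the steps Kuryr performs today.

    `net` is assumed to expose openstacksdk-like network proxy methods
    (ports, trunks, delete_trunk_subports, delete_port,
    remove_interface_from_router, delete_network).
    """
    # Get the ACTIVE ports in the subnet.
    active_ports = list(net.ports(fixed_ips='subnet_id=%s' % subnet_id,
                                  status='ACTIVE'))
    active_ids = {p.id for p in active_ports}

    # Get the trunks and detach every ACTIVE port that is a trunk subport.
    for trunk in net.trunks():
        subports = [{'port_id': sp['port_id']} for sp in trunk.sub_ports
                    if sp['port_id'] in active_ids]
        if subports:
            net.delete_trunk_subports(trunk, subports)

    # Remove the ACTIVE ports one by one (there is no bulk deletion).
    for port in active_ports:
        net.delete_port(port)

    # Get and remove the DOWN ports in the subnet, again one by one.
    for port in net.ports(fixed_ips='subnet_id=%s' % subnet_id,
                          status='DOWN'):
        net.delete_port(port)

    # Detach the subnet from the router.
    net.remove_interface_from_router(router_id, subnet_id=subnet_id)

    # Remove the network (which will also remove the subnet).
    net.delete_network(network_id)
```

A cascade deletion API would collapse all of this into one call on the server side.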

Revision history for this message
YAMAMOTO Takashi (yamamoto) wrote :

Can this be a separate service plugin, like the get-me-a-network thing?

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

I think this can be implemented as a separate service plugin.

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

We discussed this at the last drivers meeting. Here is a summary of the discussion so far:

* amotoki asked whether implementing bulk port deletion would solve the main problem Kuryr has today - it may help if the bulk deletion is also "cascade" and removes all trunk subports as well,

* if we go with bulk port deletion including cascading subport deletion, we would probably be close to implementing cascade network deletion as well,

* open question: what should the response be in the case of a partial failure?

Revision history for this message
Nate Johnston (nate-johnston) wrote :

Proposed plan here:

1. Add an option to the existing singleton port deletion behavior that also removes the port from its trunk if it is a trunk subport.

2. Implement bulk object deletion, starting with port deletion.

If these steps work, then all that remains from your earlier list is the initial call to list the existing ports and, at the end, detaching the subnet from the router and removing the network.
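Purely to illustrate the shape of the two proposed steps (the parameter name and the bulk endpoint are hypothetical - neither exists in Neutron at this point):

```python
# Hypothetical request shapes for the two proposed steps.
# 'detach_from_trunk' and the bulk DELETE endpoint are illustrative
# names only, not existing Neutron API.

def port_delete_request(port_id, detach_from_trunk=False):
    """Step 1: singleton port delete with an opt-in trunk detach."""
    path = '/v2.0/ports/%s' % port_id
    if detach_from_trunk:
        path += '?detach_from_trunk=true'
    return ('DELETE', path)

def bulk_port_delete_request(port_ids):
    """Step 2: a single request deleting many ports at once."""
    return ('DELETE', '/v2.0/ports', {'ports': list(port_ids)})
```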

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

Getting back to this. I did some research about return codes in such cases and unfortunately it seems to me that there are no strict guidelines for this.
I checked how e.g. Octavia's API with cascade loadbalancer deletion works: it always returns 204, but their API is asynchronous, so the user needs to periodically check whether everything is already gone.

AFAIU the Neutron API is synchronous with respect to resources in the DB, so it would need to return something only after the network is actually deleted. I think the best solution would be to return a code according to the success or failure of the deletion of the network resource, as that is what was requested. So e.g. if some ports were deleted, others weren't, and in the end the network wasn't deleted due to those leftover ports, I think we should return e.g. 409 Conflict, as the network couldn't really be deleted.
If you have other ideas, I'm open to proposals :)
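The proposed behaviour could be summed up in a small helper (a hypothetical sketch, not Neutron code; the inputs are assumptions):

```python
def cascade_delete_status(network_deleted, leftover_ports):
    """Pick a response code for a synchronous cascade network deletion.

    Hypothetical helper illustrating the proposal above: the status
    code reflects the fate of the network itself, since deleting the
    network is what the user actually requested.
    """
    if network_deleted:
        return 204   # No Content: network and its children are gone
    if leftover_ports:
        return 409   # Conflict: ports remain, so the network still exists
    return 500       # deletion failed for some other reason
```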

Revision history for this message
Slawek Kaplonski (slaweq) wrote :

We discussed this RFE again at today's drivers meeting (https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers.2021-09-10-14.05.log.html#l-124) and decided to approve it.
Now I will work on the spec, where I will propose the details of the new API for bulk port deletion and cascade network deletion.

Changed in neutron:
status: Triaged → Confirmed
tags: added: rfe-approved
removed: rfe-triaged
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix proposed to neutron-specs (master)

Related fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/neutron-specs/+/810822

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron-specs (master)

Reviewed: https://review.opendev.org/c/openstack/neutron-specs/+/810822
Committed: https://opendev.org/openstack/neutron-specs/commit/a0d3454d8f6690c3ff711a4d7c056d2a8ecd2799
Submitter: "Zuul (22348)"
Branch: master

commit a0d3454d8f6690c3ff711a4d7c056d2a8ecd2799
Author: Slawek Kaplonski <email address hidden>
Date: Fri Sep 24 09:10:28 2021 +0200

    Add spec for Network cascade deletion

    Co-authored-by: Sharon Koech <email address hidden>

    Related-Bug: #1870319
    Change-Id: Ic9d08e76e3bdaaa306017f796136d07d966c0cc3

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Related fix merged to neutron-lib (master)

Reviewed: https://review.opendev.org/c/openstack/neutron-lib/+/849046
Committed: https://opendev.org/openstack/neutron-lib/commit/19cb07025b9f4bb74ab4288a19e2651aead0c530
Submitter: "Zuul (22348)"
Branch: master

commit 19cb07025b9f4bb74ab4288a19e2651aead0c530
Author: Sharon Koech <email address hidden>
Date: Fri Jun 17 11:21:40 2022 +0000

    Network Cascade Deletion Extension

    This change introduces the optional boolean argument, ``cascade``,
    to the ``DELETE`` API request, that when defined as ``True``,
    deletes a network and all its associated resources.

    Related-Spec: https://review.opendev.org/c/openstack/neutron-specs/+/810822
    Related-Bug: #1870319
    Change-Id: Ib26526650b61062f753a470f5b33b8f53c0a7947

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Change abandoned on neutron (master)

Change abandoned by "Slawek Kaplonski <email address hidden>" on branch: master
Review: https://review.opendev.org/c/openstack/neutron/+/849776
Reason: This review is > 4 weeks without comment, and failed Zuul jobs the last time it was checked. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and leaving a 'recheck' comment to get fresh test results.
