Activity log for bug #2054637

Date Who What changed Old value New value Message
2024-02-22 06:58:56 Saravanan Manickam bug added bug
2024-02-22 06:59:51 Saravanan Manickam summary
Old value: Share deletion is failing at NetApp storage when DHSS=True is used for ONTAP versions >9.13.1
New value: [Manila][NetApp] Share deletion is failing at NetApp storage when DHSS=True is used for ONTAP versions >9.13.1
2024-02-22 07:09:19 Saikumar Pulluri summary
Old value: [Manila][NetApp] Share deletion is failing at NetApp storage when DHSS=True is used for ONTAP versions >9.13.1
New value: [Manila][NetApp] Share deletion is failing at NetApp storage when DHSS=True is used for ONTAP versions >=9.13.1
2024-02-22 07:09:29 Saikumar Pulluri description
Old value: identical to the new value below, except the first sentence reads "ONTAP versions >9.13.1" instead of ">=9.13.1".
New value:

Share deletion is failing at NetApp storage when DHSS=True is used with ONTAP versions >=9.13.1. It happens mainly for shares created from snapshots, since a share created from a snapshot is backed by a FlexClone volume in ONTAP. Starting from ONTAP 9.13.1, deleted clones/volumes are intentionally kept until a retention period expires, which is 12 hours by default; this covers the case where a user has mistakenly deleted FlexClone volumes. To avoid waiting for the retention period and delete FlexClone shares immediately, we can do one of the following:

1) Force-delete the FlexClone share, which deletes the volume immediately in ONTAP.
2) Set the retention period to 0 for the newly created share server.
3) Run "volume recovery-queue purge-all".

This problem is more pronounced with DHSS=True, where a new share server (vserver) and its shares are created: such shares cannot be deleted from OpenStack, leaving many shares stuck in the deleting state. I prefer that we implement option #2 in the NetApp driver code via an extra-spec option. This is also related to the deferred-deletion work in https://review.opendev.org/c/openstack/manila/+/907051?tab=comments, where the fix handles this generically; as part of this bug, we can look at fixing it at the NetApp storage level.

aff250-astra-01-02::> volume clone show
                              Parent  Parent Parent
Vserver  FlexClone    Vserver Volume Snapshot  State  Type
ms-nfs2  vol1_clone   ms-nfs2 vol1   vol1-snap online RW

aff250-astra-01-02::> volume offline -vserver ms-nfs2 -volume vol1_clone
Volume "ms-nfs2:vol1_clone" is now offline.

aff250-astra-01-02::> volume destroy -vserver ms-nfs2 -volume vol1_clone -force
[Job 10400] Job is queued: Delete vol1_clone.
Warning: Unable to list entries for kernel on node "aff250-astra-01": Volume is offline.
Volume "ms-nfs2:vol1_clone" destroyed.

aff250-astra-01-02::> volume clone show
There are no entries matching your query.
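For reference, a minimal ONTAP CLI sketch of options #2 and #3 above, reusing the ms-nfs2 vserver from the session log. The -volume-delete-retention-hours parameter and the privilege level are assumptions based on documented ONTAP 9.x recovery-queue behavior and should be verified against the target ONTAP release:

aff250-astra-01-02::> set -privilege advanced
aff250-astra-01-02::*> vserver modify -vserver ms-nfs2 -volume-delete-retention-hours 0
aff250-astra-01-02::*> volume recovery-queue show -vserver ms-nfs2
aff250-astra-01-02::*> volume recovery-queue purge-all -vserver ms-nfs2

Setting the retention to 0 on the newly created share server's vserver (option #2) avoids a cluster-wide purge and matches the extra-spec approach proposed above; purge-all only clears volumes already sitting in that vserver's recovery queue.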
2024-02-22 15:23:48 Goutham Pacha Ravi tempest: status New Invalid
2024-02-22 15:23:57 Goutham Pacha Ravi bug task added manila
2024-02-28 21:02:33 Vida Haririan manila: status New Triaged
2024-03-14 20:02:32 chuan137 bug added subscriber chuan137