Comment 3 for bug 1836310

Daniel Tapia (danielarthurt) wrote:

Hi rendl1,

Thank you for your time on this.

I investigated your report and tried to reproduce this bug.

Indeed, the space savings will be lost because the data will exist in two places instead of one while waiting for the action that completes the migration. However, during my tests no warning of this kind was observed in the log file or in the user message list.

Since we provide a two-phase migration API and the cutover action waits by design, space will be wasted if the user does not trigger the 'volume move trigger-cutover' soon after phase 1 of the migration completes. So, a suggestion to avoid wasting space while waiting is to automate the "manila migration-complete" call if desired, as sketched below.
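
For reference, a rough sketch of that two-phase flow from the CLI (the share name, destination host, and flag values are placeholders, and the exact set of required flags and API microversion depend on the python-manilaclient version in use):

    # Phase 1: start a driver-assisted migration to the destination pool
    manila migration-start demo_share ubuntu@cdot_backend#pool1 \
        --writable False --nondisruptive False \
        --preserve-metadata False --preserve-snapshots False

    # Poll until phase 1 is done (task_state reaches migration_driver_phase1_done)
    manila migration-get-progress demo_share
    manila show demo_share | grep task_state

    # Phase 2: trigger the cutover right away instead of leaving the share in the
    # copied-but-not-cut-over state where the data is duplicated
    manila migration-complete demo_share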

The warning messages received during migration_complete, which is where the 'volume-move-trigger-cutover' request is issued, were:

- Volume move operation for share #share_id is not complete. Current Phase: replicating. Retrying.
- Volume move operation for share #share_id is not complete. Current Phase: cutover. Retrying.

The last message repeats until 'wait_for_cutover_completion' detects that the cutover finished successfully or the retries are exhausted.
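
For anyone cross-checking on the storage side, the phase reported in those messages comes from the ONTAP volume move job, which can also be inspected and cut over manually (the vserver and volume names below are placeholders):

    # Inspect the ongoing volume move for the share's backing volume
    volume move show -vserver demo_vserver -volume demo_share_volume

    # Manually request the cutover (the same request migration-complete issues)
    volume move trigger-cutover -vserver demo_vserver -volume demo_share_volume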

My test scenarios covered backends configured with both dhss=true and dhss=false, migrating:

 - share with thin_provisioning -> share with thin_provisioning
 - share without thin_provisioning -> share with thin_provisioning
 - share with thin_provisioning -> share without thin_provisioning
 - share with thin_provisioning/deduplication -> share with thin_provisioning/deduplication
 - share without thin_provisioning/deduplication -> share with thin_provisioning/deduplication
 - share with thin_provisioning/deduplication -> share without thin_provisioning/deduplication

That said, if no more information is provided, the bug will be marked as Incomplete.

If someone else wants to investigate this bug in the future, please use a NetApp driver that includes this bug fix: https://review.opendev.org/#/c/712512/

In any case, please investigate the bug using the NetApp-based extra_specs format (e.g. netapp:thin_provisioned = True); a rough example follows.
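
For example, share types along these lines could be used to reproduce the scenarios listed above (the type names are placeholders, and the spec names netapp:thin_provisioned and netapp:dedup are my assumption of the vendor extra specs to use, so please adjust them to your environment):

    # Share type for thin provisioned + deduplicated shares on a dhss=false backend
    manila type-create netapp_thin_dedup false
    manila type-key netapp_thin_dedup set netapp:thin_provisioned=true netapp:dedup=true

    # Share type without thin provisioning/dedup, usable as a migration destination
    manila type-create netapp_thick false
    manila type-key netapp_thick set netapp:thin_provisioned=false netapp:dedup=false

If your client version supports it, the second type can then be passed to 'manila migration-start' via --new-share-type to reproduce the thin -> thick cases from the list above.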