Space savings from efficiency could be lost during migrations

Bug #1836310 reported by rendl1
This bug affects 1 person
Affects: OpenStack Shared File Systems Service (Manila)
Status: Expired
Importance: Undecided
Assigned to: Unassigned

Bug Description

Details
Event:
mgmt.vopl.move.cutover.deferred.wait: Cutover phase has been deferred for the volume move operation with ID '2099' on volume 'share_e2288a45_c914_4cfa_a2dd_4dfa841b85f4' on Vserver 'svm02_manila' to destination aggregate 'aggr03_node01_prod', because of 'cutover action: wait, waiting for cutover trigger'. Space savings from efficiency could be lost.
Message name:
mgmt.vopl.move.cutover.deferred.wait
Sequence number:
65690
Description:
This message occurs when the volume move job cannot be completed because the cutover phase has been deferred due to the user specifying the cutover action as 'wait'.
Action:
Use the 'volume move trigger-cutover' command to attempt the cutover phase for this volume move operation.
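
For reference, a minimal sketch of triggering the deferred cutover from the ONTAP CLI, using the volume and Vserver names from the event above:

  cluster::> volume move trigger-cutover -vserver svm02_manila -volume share_e2288a45_c914_4cfa_a2dd_4dfa841b85f4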

Details
Event:
mgmt.vopl.move.cut.entryFail: The volume move operation of volume 'share_e2288a45_c914_4cfa_a2dd_4dfa841b85f4' in Vserver 'svm02_manila' to destination aggregate 'aggr03_node01_prod' did not enter the cutover phase. The system provided the additional explanation: 'Preparing source volume for cutover: Deduplication is not ready for volmove cutover'. After a short delay, the volume move operation will reattempt cutover entry. The job ID for the volume move job is '2099'.
Message name:
mgmt.vopl.move.cut.entryFail
Sequence number:
65700
Description:
This message occurs when the volume move operation cannot enter the cutover phase. The original source volume is functional. After a short delay, the volume move operation will reattempt cutover entry.
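
The progress of the reattempted cutover can be checked from the ONTAP CLI; a minimal sketch, again using the names from the event above (the columns displayed vary by ONTAP release):

  cluster::> volume move show -vserver svm02_manila -volume share_e2288a45_c914_4cfa_a2dd_4dfa841b85f4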

Tags: netapp
Jason Grosso (jgrosso)
Changed in manila:
assignee: nobody → Carlos Augusto Fagundes dos Santos (carloss)
assignee: Carlos Augusto Fagundes dos Santos (carloss) → Carlos Eduardo (silvacarlose)
Douglas Viroel (dviroel)
tags: added: netapp
Revision history for this message
Vida Haririan (vhariria) wrote :
Revision history for this message
Carlos Eduardo (silvacarlose) wrote :

Hi, rendl1!

Thanks for reporting this bug.

We would like to reproduce the issue, but in order to do that we need some more information about the issue itself and the environment, for instance:

- The steps to reproduce the bug
- Information about the configured backend - i.e. whether it is operating under dhss=true or false, etc.

Could you please provide this information?

Changed in manila:
status: New → Incomplete
Revision history for this message
Daniel Tapia (danielarthurt) wrote :

Hi rendl1!

Thank you for your time on this.

I was investigating your report, and I tried to reproduce this bug.

Indeed, space savings will be lost because the data exists in two places instead of one while waiting for the action that completes the migration. However, no warning of this kind was observed in the log file nor in the user-message-list during my tests.

Since we provide a two-phase migration API and the cutover action is set to 'wait' by design, not triggering 'volume move trigger-cutover' promptly after phase 1 of the migration completes will result in wasted space. So, a suggestion for not wasting space while waiting for migration-complete is to automate the "manila migration-complete" call, if desired (a sketch follows below).
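
As an illustration only, a minimal shell sketch of such automation, assuming the python-manilaclient CLI, an admin context where 'manila show' exposes the task_state field, and that the driver-assisted flow reports the task state 'migration_driver_phase1_done' when the first phase is finished; the share name and destination host are placeholders:

  # Start a driver-assisted, two-phase migration (phase 1 copies the data).
  manila migration-start my_share dest_host@netapp_backend#aggr03_node01_prod \
      --writable False --preserve-metadata True --preserve-snapshots True --nondisruptive True

  # Poll until phase 1 is done, then complete the migration right away so the
  # cutover is triggered and the duplicate copy of the data is released.
  until manila show my_share | grep -q migration_driver_phase1_done; do sleep 60; done
  manila migration-complete my_share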

The warning messages received during migration_complete, which is where the 'volume-move-trigger-cutover' request is issued, were:

- Volume move operation for share #share_id is not complete. Current Phase: replicating. Retrying.
- Volume move operation for share #share_id is not complete. Current Phase: cutover. Retrying.

The last message will remain until 'wait_for_cutover_completion' detects that the cutover has successfully ended or the retries are exhausted.

For my test scenario, the backend was configured with both dhss=true and dhss=false, and the following source -> destination combinations were tested:

 - share with thin_provisioning -> share with thin_provisioning
 - share without thin_provisioning -> share with thin_provisioning
 - share with thin_provisioning -> share without thin_provisioning
 - share with thin_provisioning/deduplication -> share with thin_provisioning/deduplication
 - share without thin_provisioning/deduplication -> share with thin_provisioning/deduplication
 - share with thin_provisioning/deduplication -> share without thin_provisioning/deduplication

With that said, if no more information is provided, the bug will be marked as Incomplete.

If someone else wants to investigate this bug in the future, please use the NetApp driver updated with this bug fix: https://review.opendev.org/#/c/712512/

If not, please investigate the bug using the NetApp-based extra_specs format (e.g. netapp:thin_provisioning = True).
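
For instance, a minimal sketch of creating a share type that requests thin provisioning through both the common capability and the NetApp-scoped spec named above (the type name is a placeholder, and the exact scoped spec name should be checked against the driver documentation):

  manila type-create netapp_thin_type True
  manila type-key netapp_thin_type set thin_provisioning='<is> True' netapp:thin_provisioning=True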

Changed in manila:
assignee: Carlos Eduardo (silvacarlose) → nobody
Revision history for this message
Launchpad Janitor (janitor) wrote :

[Expired for OpenStack Shared File Systems Service (Manila) because there has been no activity for 60 days.]

Changed in manila:
status: Incomplete → Expired