progress_watermark needs to be reset when the next migration iteration starts

Bug #1639091 reported by linbing

Affects: OpenStack Compute (nova)
Status: In Progress
Importance: Undecided
Assigned to: linbing
Milestone: (none)

Bug Description

In _live_migration_monitor in nova/virt/libvirt/driver.py, progress_watermark needs to be reset to 0 when progress_watermark is smaller than info.data_remaining.

When a new migration iteration begins, progress_watermark may still hold the low-water mark of data_remaining from the previous iteration. In an environment with heavy RAM writes, the migration may need multiple iterations, so data_remaining can rise above progress_watermark; there is a log message that records this case. Once that happens and the next iteration comes in, progress_watermark stays smaller than data_remaining forever and progress_time is never reset. The monitor then aborts the migration with a progress timeout even though the migration has not actually stalled.
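
The sketch below is a minimal paraphrase of the watermark bookkeeping described above, not nova's actual code: the names progress_watermark, progress_time and info.data_remaining come from this report, while monitor_step, the timeout value and the SimpleNamespace-based info object are illustrative assumptions. It shows why progress_time stops being refreshed once a new iteration raises data_remaining above the old watermark, and where the proposed reset would go (this sketch resets the watermark to the new data_remaining so the next iteration is measured from its own starting point, which is one way to achieve the reset the report asks for).

    # Simplified paraphrase of the watermark logic in _live_migration_monitor.
    # NOT nova's exact code; monitor_step, the timeout value and the info
    # object are made up for illustration.
    import time
    from types import SimpleNamespace

    progress_watermark = None    # lowest data_remaining seen so far
    progress_time = time.time()  # last time real progress was observed
    progress_timeout = 150       # seconds; illustrative value only

    def monitor_step(info):
        """One polling step of the monitor loop for an in-flight migration."""
        global progress_watermark, progress_time
        now = time.time()

        if progress_watermark is None or progress_watermark > info.data_remaining:
            # A new low-water mark: this iteration made progress.
            progress_watermark = info.data_remaining
            progress_time = now
        elif progress_watermark < info.data_remaining:
            # data_remaining grew past the watermark: a new migration
            # iteration has started because dirtied guest RAM must be
            # re-sent. Without a reset here, progress_watermark keeps the
            # old iteration's minimum, the branch above never fires again,
            # progress_time is never refreshed, and the timeout check below
            # aborts a migration that is in fact progressing.
            #
            # Proposed fix from this report: reset the watermark. This
            # sketch uses the new data_remaining as the fresh baseline.
            progress_watermark = info.data_remaining

        if progress_timeout != 0 and (now - progress_time) > progress_timeout:
            raise RuntimeError("live migration appears stuck; aborting")

    # Example: a second iteration starts with more data remaining than the
    # first iteration's low-water mark.
    monitor_step(SimpleNamespace(data_remaining=4096))  # first low mark
    monitor_step(SimpleNamespace(data_remaining=8192))  # new iteration begins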

linbing (hawkerous)
Changed in nova:
assignee: nobody → linbing (hawkerous)
status: New → In Progress