Slow backups of large files, blocksize too small?
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Duplicity | Fix Released | Medium | Unassigned | |
Bug Description
I'm using duplicity to back up large files, but the backup seems to take forever.
It seems this could be related to the block size used; it also seems the calculated block size doesn't match the comment.
In file diffdir.py:
```python
def get_block_size(file_len):
    """
    Return a reasonable block size to use on files of length file_len

    If the block size is too big, deltas will be bigger than is
    necessary. If the block size is too small, making deltas and
    patching can take a really long time.
    """
    if file_len < 1024000:
        return 512  # set minimum of 512 bytes
    else:
        # Split file into about 2000 pieces, rounding to 512
        file_blocksize = long(file_len / (2000 * 512)) * 512
        return min(file_blocksize, 2048L)
```
For files larger than 2000 * 2048 bytes it won't split them into about 2000 pieces, but will cap the block size at 2048 bytes.
Shouldn't this be:
`return max(file_blocksize, 2048L)`
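For illustration, a rough comparison of what the two variants give for a 4 GiB file (a sketch, not the exact code in diffdir.py):

```python
file_len = 4 * 1024 ** 3                           # 4 GiB
file_blocksize = (file_len // (2000 * 512)) * 512  # ~2 MB, per the comment above

current = min(file_blocksize, 2048)    # current code: 2048 bytes
proposed = max(file_blocksize, 2048)   # suggested change: ~2 MB

print(current, file_len // current)    # 2048 -> ~2.1 million blocks
print(proposed, file_len // proposed)  # ~2 MB -> ~2000 blocks
```

So with the current min() the delta has to be computed over roughly two million blocks instead of about 2000, which would explain the very slow backups.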
--
Sander
Changed in duplicity:
importance: Undecided → Medium
milestone: none → 0.6.22
status: New → Fix Committed

Changed in duplicity:
status: Fix Committed → Fix Released
Hmm, with a little more thought, max() isn't very smart either; there should be an upper limit.
So two alternatives (see the sketch below):
- Perhaps bump the 2048 to, say, 4 GB / 2000 bytes? (That would handle DVD ISOs.)
- Make max_blocksize configurable, depending on whether smaller deltas or faster delta generation is preferred.
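A minimal sketch of the configurable alternative, assuming a max_blocksize setting that could be wired to a command line option (the name and default here are placeholders, not the actual duplicity implementation):

```python
DEFAULT_MAX_BLOCKSIZE = 2048  # the current hard-coded cap

# Hypothetical setting, e.g. filled in from a --max-blocksize option.
max_blocksize = DEFAULT_MAX_BLOCKSIZE


def get_block_size(file_len):
    """
    Split file_len into about 2000 pieces, rounded to 512 bytes,
    but never exceed max_blocksize.
    """
    if file_len < 1024000:
        return 512  # minimum of 512 bytes
    # Split file into about 2000 pieces, rounding to 512
    file_blocksize = (file_len // (2000 * 512)) * 512
    return min(file_blocksize, max_blocksize)
```

Left at 2048 this behaves like the current code; set to roughly 4 GB / 2000 ≈ 2 MB it would keep even a DVD iso at about 2000 blocks.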