I also ran into this problem. Very large files become a bottleneck, to the point that performing a backup becomes practically impossible if there is any file larger than a couple of GB.
As suggested, I made the following patch, which introduces a --max-blocksize option for both duplicity and rdiffdir (defaulting to 2048, the previous hard-coded value).
With --max-blocksize 16777216 I can now saturate my server's I/O; the larger deltas are an acceptable trade-off for me.
The only thing still missing is proper documentation.
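To illustrate the idea behind the option, here is a minimal sketch (not the actual patch; function and constant names are my own assumptions): instead of a fixed signature block size, the block size grows with file size so huge files don't generate millions of tiny blocks, but it is capped at a configurable maximum rather than the old hard-coded 2048 bytes.

```python
# Hypothetical sketch of block-size selection with a configurable cap.
# Names and the size heuristic are assumptions for illustration only.

DEFAULT_MAX_BLOCKSIZE = 2048  # the previous fixed cap


def choose_blocksize(file_size, max_blocksize=DEFAULT_MAX_BLOCKSIZE):
    """Pick a power-of-two block size that scales with file size, capped.

    A larger block size means fewer blocks to hash for big files (faster,
    less memory) at the cost of coarser, larger deltas.
    """
    # Aim for a block size roughly proportional to the file size
    # (assumed heuristic), never below 512 bytes.
    target = max(file_size // 2000, 512)
    blocksize = 512
    while blocksize < target and blocksize < max_blocksize:
        blocksize *= 2
    return min(blocksize, max_blocksize)


# With the default cap, a 1 GiB file is still limited to 2048-byte blocks:
print(choose_blocksize(1 * 2**30))              # 2048
# Raising the cap (as with --max-blocksize 16777216) lets huge files use
# much larger blocks:
print(choose_blocksize(100 * 2**30, 16777216))  # 16777216
```

This shows why a raised cap trades delta granularity for throughput: the signature pass touches far fewer blocks on multi-GB files.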