S3 multichunk support uploads too much data
Bug #885513 reported by Michael Terry
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Duplicity | New | Undecided | Unassigned |
Bug Description
If the volume being uploaded is not perfectly divisible by the S3 multichunk size, duplicity will end up uploading too much data.
This appears to be because of the following code in botobackend.py:
chunks = bytes / chunk_size
if (bytes % chunk_size):
    ...
for n in range(chunks):
    params = {
        ...
Also, you should be able to work around this by adding --s3-multipart-chunk-size=1000 to the command line.
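For example (the source path and bucket below are placeholders, and the chunk size is assumed to be given in megabytes, large enough that no volume is split into parts):

duplicity full /path/to/source s3+http://example-bucket/backup --s3-multipart-chunk-size=1000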