S3 multichunk support uploads too much data

Bug #885513 reported by Michael Terry
Affects: Duplicity
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

If the size of the volume being uploaded is not evenly divisible by the S3 multipart chunk size, duplicity ends up uploading too much data.

This appears to be because of the following code in botobackend.py:

            chunks = bytes / chunk_size
            if (bytes % chunk_size):
                chunks += 1
...
        for n in range(chunks):
            params = {
...
                'bytes': chunk_size,
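
For illustration, here is a minimal sketch (not the actual duplicity code) of how the chunk count and per-chunk byte counts could be computed so that the final chunk only covers the remainder; the name chunk_sizes is a placeholder, and total_bytes / chunk_size echo the bytes and chunk_size variables in the excerpt above:

    def chunk_sizes(total_bytes, chunk_size):
        """Yield (offset, length) pairs covering total_bytes in chunk_size pieces."""
        chunks = total_bytes // chunk_size
        if total_bytes % chunk_size:
            chunks += 1
        for n in range(chunks):
            offset = n * chunk_size
            # Clamp the last chunk instead of always passing chunk_size bytes.
            yield offset, min(chunk_size, total_bytes - offset)

For example, a 25 MB volume with a 10 MB chunk size would yield chunks of 10 MB, 10 MB and 5 MB, rather than three full 10 MB chunks (30 MB uploaded for 25 MB of data).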

Michael Terry (mterry) wrote:

Also, you should be able to work around this by adding --s3-multipart-chunk-size=1000 to the command line.

Henrique Carvalho Alves (hcarvalhoalves) wrote:

The error is actually in the code that streams data from the file; it reads the wrong chunks. Please refer to bug #881070.

Otherwise, the S3 backend just makes sure to divide volsize into N (+1) chunks of chunk_size.
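
As a rough illustration of that streaming side, here is a standard-library-only sketch of reading exactly the intended slice of the volume file for each part; upload_in_chunks and the upload_part callback are hypothetical stand-ins for the real S3 multipart upload calls:

    import os

    def upload_in_chunks(path, chunk_size, upload_part):
        """Read `path` in chunk_size slices and hand each slice to `upload_part`."""
        total = os.path.getsize(path)
        offset, part_number = 0, 1
        with open(path, "rb") as fp:
            while offset < total:
                length = min(chunk_size, total - offset)  # last slice may be shorter
                fp.seek(offset)
                upload_part(part_number, fp.read(length))
                offset += length
                part_number += 1

If instead every part is read (or reported) as a full chunk_size, the last part overshoots the end of the file, which matches the over-upload described in this report.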
