Uploading full signatures for large backup set fails with B2 backend

Bug #1544707 reported by Maakuth
This bug affects 2 people
Affects: Duplicity
Status: In Progress
Importance: Medium
Assigned to: Kenneth Loafman
Milestone: (none)

Bug Description

I have a backup set of around 450GiB. Uploading it to B2 went fine, but at the end the backup wasn't successful. I've tried it multiple times, but it always comes down to the same issue:

...
Processed volume 17633
Writing duplicity-full-signatures.20151212T092152Z.sigtar.gpg
Attempt 1 failed. SSLError: ('The read operation timed out',)
Writing duplicity-full-signatures.20151212T092152Z.sigtar.gpg
Attempt 2 failed. SSLError: ('The read operation timed out',)
Writing duplicity-full-signatures.20151212T092152Z.sigtar.gpg
Attempt 3 failed. SSLError: ('The read operation timed out',)
Writing duplicity-full-signatures.20151212T092152Z.sigtar.gpg
Attempt 4 failed. SSLError: ('The read operation timed out',)
Writing duplicity-full-signatures.20151212T092152Z.sigtar.gpg
Giving up after 5 attempts. URLError: <urlopen error [Errno 104] Connection reset by peer>

The file in question is around 3.4GiB. I'm thinking there's some special treatment needed when uploading such large files to B2, but I'm not sure. I think B2 is supposed to support files up to 5GB.

Duplicity 0.7.06
Python 2.7.9
Debian 8.3 "Jessie"
Backing up from an XFS volume to Backblaze B2

I'll submit more log lines once I've captured them to a file.

Tags: b2
Revision history for this message
Kenneth Loafman (kenneth-loafman) wrote :

In version 0.8, duplicity will split the sig files to the same size as volsize.

For now, you can shrink the sig file size down by specifying --max-blocksize=20480. I'd also suggest setting --volsize=100 to decrease the number of difftar files.
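For illustration, an invocation with those options might look like the following (the source path, bucket name, and credentials are placeholders, not taken from this report):

duplicity full --volsize=100 --max-blocksize=20480 /path/to/source b2://ACCOUNT_ID:APPLICATION_KEY@my-bucket/backups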

Changed in duplicity:
milestone: none → 0.8.00
importance: Undecided → Medium
assignee: nobody → Kenneth Loafman (kenneth-loafman)
status: New → In Progress
Markus (mstoll-de)
tags: added: b2
Revision history for this message
Hannes Tismer (hannes.tismer) wrote :

Hey,

This approach is a bit overkill for fixing the B2 behaviour.

B2 supports the `b2_start_large_file` command, which allows up to 10 TB per file. It just has to be used when the metadata file exceeds 5 GB in size.

More info: https://www.backblaze.com/b2/docs/large_files.html

I'd love to use this instead of splitting metadata archives up.
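For illustration, a rough sketch of that flow in Python (using the requests library; the endpoint and field names follow the docs linked above and should be checked against the current B2 documentation; this is not duplicity's actual backend code):

import hashlib
import requests

# B2's minimum part size for all parts except the last (per the docs above).
PART_SIZE = 100 * 1024 * 1024

def upload_large_file(api_url, auth_token, bucket_id, file_name, local_path):
    headers = {"Authorization": auth_token}

    # 1. Start the large file and get its fileId.
    start = requests.post(
        api_url + "/b2api/v1/b2_start_large_file",
        headers=headers,
        json={"bucketId": bucket_id,
              "fileName": file_name,
              "contentType": "application/octet-stream"})
    start.raise_for_status()
    file_id = start.json()["fileId"]

    # 2. Get an upload URL and token for the parts.
    part_url = requests.post(
        api_url + "/b2api/v1/b2_get_upload_part_url",
        headers=headers,
        json={"fileId": file_id})
    part_url.raise_for_status()
    upload_url = part_url.json()["uploadUrl"]
    upload_token = part_url.json()["authorizationToken"]

    # 3. Upload the file in parts, recording each part's SHA1.
    sha1s = []
    with open(local_path, "rb") as f:
        part_number = 1
        while True:
            chunk = f.read(PART_SIZE)
            if not chunk:
                break
            sha1 = hashlib.sha1(chunk).hexdigest()
            sha1s.append(sha1)
            resp = requests.post(
                upload_url,
                headers={"Authorization": upload_token,
                         "X-Bz-Part-Number": str(part_number),
                         "X-Bz-Content-Sha1": sha1},
                data=chunk)
            resp.raise_for_status()
            part_number += 1

    # 4. Finish the large file by sending the list of part SHA1s.
    finish = requests.post(
        api_url + "/b2api/v1/b2_finish_large_file",
        headers=headers,
        json={"fileId": file_id, "partSha1Array": sha1s})
    finish.raise_for_status()
    return finish.json()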

Revision history for this message
trunet (wsartori) wrote :

Try using this backend; it should fix the problem:
https://bugs.launchpad.net/duplicity/+bug/1654756

Changed in duplicity:
milestone: 0.8.00 → 0.8.01
Changed in duplicity:
milestone: 0.8.01 → none