Uploading full signatures for large backup set fails with B2 backend
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Duplicity | In Progress | Medium | Kenneth Loafman |
Bug Description
I have a backup set of around 450 GiB. Uploading it to B2 went fine, but at the end the backup wasn't successful. I've tried it multiple times, but it always comes down to the same issue:
...
Processed volume 17633
Writing duplicity-
Attempt 1 failed. SSLError: ('The read operation timed out',)
Writing duplicity-
Attempt 2 failed. SSLError: ('The read operation timed out',)
Writing duplicity-
Attempt 3 failed. SSLError: ('The read operation timed out',)
Writing duplicity-
Attempt 4 failed. SSLError: ('The read operation timed out',)
Writing duplicity-
Giving up after 5 attempts. URLError: <urlopen error [Errno 104] Connection reset by peer>
The file in question is around 3.4 GiB. I suspect some special handling is needed when uploading files this large to B2, but I'm not sure. I think B2 is supposed to support files up to 5 GB.
Duplicity 0.7.06
Python 2.7.9
Debian 8.3 "Jessie"
Backing up from an XFS volume to Backblaze B2
I'll submit more log lines once I've captured them to a file.
tags: added: b2
Changed in duplicity:
milestone: 0.8.00 → 0.8.01
Changed in duplicity:
milestone: 0.8.01 → none
In version 0.8, duplicity will split the signature files so that each part is no larger than the volume size.
For now, you can shrink the signature file size by specifying --max-blocksize=20480. I'd also suggest setting --volsize=100 to reduce the number of difftar files.
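As a sketch, the two workaround options above could be combined into a single invocation like the following. The source path, key ID, application key, and bucket name are placeholders, not values from this report:

```shell
# Hypothetical example of the suggested workaround:
# --max-blocksize=20480 keeps individual signature blocks small,
# --volsize=100 uses 100 MB volumes, reducing the number of difftar files.
duplicity \
  --max-blocksize=20480 \
  --volsize=100 \
  /path/to/source \
  b2://keyID:applicationKey@bucket-name/backup-prefix
```

Whether this avoids the timeout on a given connection is not confirmed here; it only reduces the size of the files duplicity has to upload in one piece.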