Backup to S3 seems rather slow

Bug #371482 reported by David Rahrer
This bug report is a duplicate of: Bug #401094: Slow backup speed.
This bug affects 1 person
Affects    Status     Importance  Assigned to  Milestone
Duplicity  Confirmed  Wishlist    Unassigned   (none)
Déjà Dup   Confirmed  Wishlist    Unassigned   (none)

Bug Description

My test backups to Amazon S3 seem really slow in comparison to, say, JungleDisk. I have about 1.5 MB/s actual upload (cable modem) ability, and it takes about 5 minutes to upload just shy of 40 MB. If I figured correctly, that's a bit over 60 hours for 30 GB. Is this similar to others? I realize that once it has been uploaded that only the difference will be backed up thereafter, but I want to make sure there isn't something I can check to make this run faster.

Note: this was with encryption on, but without it is still quite slow.

--
2x AMD Athlon(tm) 64 X2 Dual Core Processor 4400+
2 GB RAM
Ubuntu 9.04
Deja-Dup 9.1
Duplicity 0.5.16-0Jaunty3
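
As a rough sanity check on the figures in the description (40 MB uploaded in 5 minutes, extrapolated to a 30 GB initial backup), the arithmetic can be sketched as:

```shell
# Effective upload rate implied by the report: 40 MB in 5 minutes.
rate=$(awk 'BEGIN { printf "%.3f", 40 / (5 * 60) }')            # MB/s
# Time to push 30 GB (30 * 1024 MB) at that rate, in hours.
hours=$(awk 'BEGIN { printf "%.1f", (30 * 1024) / (40 / (5 * 60)) / 3600 }')
echo "Effective rate: ${rate} MB/s; 30 GB would take ~${hours} hours"
```

This works out to about 0.133 MB/s and roughly 64 hours, consistent with the "a bit over 60 hours" estimate above.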

Revision history for this message
David Rahrer (david-rahrer) wrote :
Michael Terry (mterry) wrote :

It is very slow, but I'm not clear on the causes. This might be a good question for the duplicity maintainer, Ken.

Changed in deja-dup:
importance: Undecided → Wishlist
status: New → Confirmed
Henry Ludemann (misc-hl) wrote :

From a quick look at the deja-dup source, can you confirm that deja-dup isn't using the '--archive-dir' option? If it isn't, that may be the cause of the speed hit, as signature files will need to be re-downloaded on every backup.

Currently duplicity doesn't cache/archive signature files that it has to request from the server; the flag is (currently) only useful when starting from a fresh backup.
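
For reference, a minimal sketch of invoking duplicity directly with the option discussed above (the bucket name and source path here are placeholders, not taken from this report):

```shell
# Keep signature files in a local archive directory so duplicity does
# not have to fetch them from S3 at the start of each incremental run.
duplicity --archive-dir ~/.cache/duplicity \
    /home/user s3+http://my-backup-bucket
```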

Michael Terry (mterry) wrote :

Deja Dup does not use the --archive-dir option. I intend to add support for it, but haven't gotten to it yet. So that makes things a little slower, but I doubt it accounts for much of the total backup time.

Henry Ludemann (misc-hl) wrote :

It depends; for my 1.5 GB of backups on Amazon (using duplicity directly), it has to download 50 MB of data, no matter how little I have to upload.

Changed in duplicity:
assignee: nobody → Kenneth Loafman (kenneth-loafman)
importance: Undecided → Medium
status: New → In Progress
Peter Schuller (scode) wrote :

Doesn't the speed in the original post indicate roughly 1 Mbit/s of bandwidth, i.e. not that far off the 1.5 Mbit/s theoretical bandwidth reported? Or was that really supposed to be 1.5 MByte/s? (I suspected Mbit, since it was a cable modem.)

In any case, one thing you can do if you're in Europe is to use European buckets (see --s3-european-buckets and --s3-use-new-style). Recently I've found European S3 to saturate my upstream (10-15 Mbit/s), though in the past it has not been quite as fast as one would expect.
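
A sketch of the flags mentioned above, for a direct duplicity invocation (bucket name and source path are placeholders; --s3-european-buckets requires the subdomain-style addressing that --s3-use-new-style selects):

```shell
# Back up to an S3 bucket in the EU region, which may be much faster
# for European users than the default US location.
duplicity --s3-use-new-style --s3-european-buckets \
    /home/user s3+http://my-eu-backup-bucket
```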

Also, regarding the comparison to Jungledisk, are you using concurrent uploading support (which, IIRC, Jungledisk supports)? If so that can easily account for performance differences.

While I've successfully used the S3 backend with concurrent uploads (more than one at a time), I'm not sure whether that is actually safe. I dropped the ball on my planned work to make these code paths concurrency-safe...

Changed in duplicity:
status: In Progress → Confirmed
importance: Medium → Wishlist
assignee: Kenneth Loafman (kenneth-loafman) → nobody