Duplicity deletes contents of cache on S3 network error
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Duplicity | New | Undecided | Unassigned | |
Bug Description
Duplicity version: 0.6.18
Python version: 2.6.6
Distribution: Debian 6.0.7 ("squeeze")
I've had this happen a few times, I think, but only realised what the issue was on the most recent attempt. If my scheduled Duplicity backup runs during a network outage, it apparently fails to resolve the bucket's DNS entry, concludes the bucket does not exist, and tries to create it. Unfortunately, before doing so, it deletes every file in the local cache because they appear to be absent from the backend. My cron output looks something like this (with my details censored out):
Deleting local /home/[
[... repeated for every file in the cache ...]
Last full backup date: none
Last full backup is too old, forcing full backup
Failed to create bucket (attempt #1) '[CENSORED]' failed (reason: gaierror: [Errno -2] Name or service not known)
Unfortunately I have an S3 lifecycle rule set up to transition objects to Glacier that are older than a certain age, so recovering from this requires restoring all the manifests / signatures / etc. so that Duplicity can cache them again, which is a somewhat tedious and time-consuming manual process. I don't have -v9 output since this was an unattended automated job run from cron.
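The failure mode described above is that a transient DNS error during listing is treated the same as an empty (nonexistent) bucket, so the cache sync wipes the local files. A minimal sketch of a safer sync, assuming hypothetical `list_remote` and `purge_local` callables standing in for Duplicity's backend and cache layers (not Duplicity's actual internals):

```python
import socket


class MissingBucketError(Exception):
    """Raised only when the backend *positively* reports the bucket absent."""


def sync_cache(list_remote, purge_local):
    """Safer cache sync sketch.

    Only purge local cache entries once the backend has been listed
    successfully (or has positively reported the bucket missing).
    A transient network error, such as the DNS failure in the bug report
    (socket.gaierror: Name or service not known), aborts the run
    without touching the cache.
    """
    try:
        remote_files = list_remote()
    except socket.gaierror:
        # Network outage: we cannot know whether the bucket exists,
        # so leave the local cache alone and bail out.
        return "aborted"
    except MissingBucketError:
        # The backend answered and the bucket really is gone.
        remote_files = set()
    # Only now is it safe to drop cache entries absent from the backend.
    purge_local(remote_files)
    return "synced"
```

With this ordering, the "Deleting local ..." pass in the report would never run during an outage; the job would simply fail and retry on the next cron cycle.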
Hi, I'm using http://bazaar.launchpad.net/~chameleonator/duplicity/s3-skip-glacier/revision/968 to get around the transitioned Glacier files in my restores. We test our backups monthly, so we're only interested in the latest set that hasn't made it to Glacier yet.
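The idea behind that branch, skipping objects already transitioned to Glacier when building a restore set, can be sketched independently of Duplicity. This is an illustrative helper, not the branch's actual code; it assumes the listing provides each object's storage class, as S3's ListObjectsV2 response does via its `StorageClass` field:

```python
# Storage classes that cannot be read directly without a restore request.
NON_RETRIEVABLE = ("GLACIER", "DEEP_ARCHIVE")


def latest_restorable_set(objects):
    """Filter a backup listing down to directly retrievable objects.

    `objects` is assumed to be an iterable of (key, storage_class) pairs,
    e.g. derived from an S3 ListObjectsV2 response. Keys already
    transitioned to Glacier-type classes are skipped, mirroring the
    s3-skip-glacier workaround described above.
    """
    return [key for key, storage_class in objects
            if storage_class not in NON_RETRIEVABLE]
```

For a monthly test restore, this keeps only the recent volumes a lifecycle rule has not yet archived, so the restore can proceed without waiting on Glacier retrievals.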