Multi backend should offer mirror option
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Duplicity | Fix Released | Medium | Unassigned |
Bug Description
Using duply with a 'multi' backend in its current state means that if any of the providers stops working, your backups can be rendered useless.
In my current setup I use the 'post' operation to effectively emulate what multi would do if it were to use mirroring (ex: use rsync to davfs mounts for some providers and megasync to push to mega).
Looking at the source, it doesn't look like it would be too hard to implement. Effectively, we would make the following behavior changes:
In _put - drop usage of the write cursor and perform the put command for each store. Behavior on failure could reasonably be handled as fail or skip; I imagine this should be configurable...
In _get - try each source until one succeeds in downloading, probably keeping a read cursor so that the first source that succeeds is re-used next time.
In _list - try each source and store all names in a set to prevent duplicates.
In _delete - perform the delete on each source, _not_ returning after first successful deletion.
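The four changes above can be sketched as follows. This is an illustrative mock-up, not the actual multibackend.py code: the `stores` list and the per-store `put`/`get`/`list`/`delete` calls are assumed names standing in for whatever the real wrapped-backend API looks like.

```python
class MultiMirrorSketch:
    """Illustrative mirror behavior; not the real multibackend.py API."""

    def __init__(self, stores):
        self.stores = stores   # hypothetical list of wrapped backend objects
        self._read_cursor = 0  # remember the last source that worked

    def _put(self, source_path, remote_filename):
        # Mirror: write to every store instead of advancing a write cursor.
        for store in self.stores:
            # A failure propagates (fail); skipping could also be offered.
            store.put(source_path, remote_filename)

    def _get(self, remote_filename, local_path):
        # Try each source, starting from the one that last succeeded.
        n = len(self.stores)
        for offset in range(n):
            i = (self._read_cursor + offset) % n
            try:
                self.stores[i].get(remote_filename, local_path)
                self._read_cursor = i  # re-use this source next time
                return
            except Exception:
                continue
        raise IOError("%s not found on any store" % remote_filename)

    def _list(self):
        # Union of all stores' listings; a set prevents duplicates.
        names = set()
        for store in self.stores:
            names.update(store.list())
        return sorted(names)

    def _delete(self, filename):
        # Delete everywhere; do NOT stop after the first success.
        for store in self.stores:
            store.delete(filename)
```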
I imagine that in the parsed URL, the path would still point to the JSON file. Since the JSON file has a list at its root, the 'stripe' vs. 'mirror' metadata should probably be passed as a query parameter, defaulting to stripe to preserve legacy behavior.
Ex: multi:///path/to/config.json?mode=mirror
For behavior on upload failure, we can add another parameter:
Ex: multi:///path/to/config.json?mode=mirror&onfail=continue
Options for onfail would be:
* abort - stop uploading and report the failure up as-is
* cleanup - stop uploading, delete the file from all stores where the upload succeeded, and report the failure up as-is. This helps avoid junk files accumulating, although it doesn't address cleanup of other files that failed to upload earlier.
* continue - continue uploading to other repositories
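A minimal sketch of how such query parameters could be parsed from the backend URL. The `mode` and `onfail` parameter names follow the proposal above; the example path and the onfail default are illustrative assumptions:

```python
from urllib.parse import urlparse, parse_qs

def parse_multi_url(url):
    """Extract the JSON config path plus mode/onfail query parameters.

    mode defaults to 'stripe' to preserve legacy behavior, as proposed.
    The onfail default of 'continue' is an assumption; the report leaves
    it unspecified.
    """
    parsed = urlparse(url)
    query = parse_qs(parsed.query)
    mode = query.get("mode", ["stripe"])[0]        # 'stripe' | 'mirror'
    onfail = query.get("onfail", ["continue"])[0]  # 'abort' | 'cleanup' | 'continue'
    return parsed.path, mode, onfail
```

The path component still points at the JSON file, so existing URLs without any query string keep their current striping behavior.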
Further enhancements _could_ include performing the upload to each backend simultaneously, but that depends on the backend and on multi-thread safety...
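Assuming the wrapped backends really are thread-safe, the simultaneous upload could be sketched with a thread pool (all names here are hypothetical, not part of the existing module):

```python
from concurrent.futures import ThreadPoolExecutor

def mirror_put_parallel(stores, source_path, remote_filename):
    """Upload to all stores at once.

    Returns one entry per store: None on success, or the exception raised.
    Only safe if each backend tolerates concurrent use.
    """
    def upload(store):
        try:
            store.put(source_path, remote_filename)
            return None
        except Exception as exc:
            return exc

    with ThreadPoolExecutor(max_workers=len(stores)) as pool:
        # pool.map preserves store order, so results line up with stores.
        return list(pool.map(upload, stores))
```

Collecting per-store results rather than raising immediately would let the caller apply whichever onfail policy is configured after all uploads finish.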
If the workload for the project is high and others are interested in this, I could implement at least the base configuration in the multibackend.py module.
Related branches
- edso: Approve
- Diff: 293 lines (+174/-17), 2 files modified:
  - bin/duplicity.1 (+55/-7)
  - duplicity/backends/multibackend.py (+119/-10)
tags: added: patch
Changed in duplicity:
  importance: Undecided → Medium
  milestone: none → 0.7.07
  status: New → Fix Committed
Changed in duplicity:
  status: Fix Committed → Fix Released
I have a work-in-progress patch; however, I have run into an issue: because of the way duply calls out to duplicity, the '&' in the TARGET causes the shell to fork the command into the background!
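For illustration, quoting the TARGET value avoids that forking behavior. The URL below is a made-up placeholder (the real TARGET is not shown in this report); the point is only the quoting:

```shell
# Single quotes keep the '&' literal; unquoted, the shell would treat it as
# a control operator and background everything to the left of it.
TARGET='multi:///etc/duply/cfg.json?mode=mirror&onfail=abort'
echo "$TARGET"
```

duply would need to pass this value to duplicity quoted (or escaped) for the query-string syntax to survive the shell.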