Multi backend should offer mirror option

Bug #1474994 reported by Thomas Harning
Affects: Duplicity
Status: Fix Released
Importance: Medium
Assigned to: Unassigned
Milestone: none

Bug Description

Using duply with a 'multi' backend in its current state means that if any of the providers stops working, your backups can be rendered useless.

In my current setup I use the 'post' operation to emulate what multi would do if it supported mirroring (e.g., rsync to davfs mounts for some providers and megasync to push to Mega).

Looking at the source, it doesn't look like it would be too hard to implement; effectively, we would make the following behavior changes:

In _put - drop the write cursor and perform the put against every store. Behavior on failure could reasonably be handled as fail or skip; I imagine this should be configurable.

In _get - try each source until one succeeds in downloading, probably keeping a read cursor so that the first source to succeed is reused next time.

In _list - try each source and store all names in a set to prevent duplicates.

In _delete - perform the delete on each source, _not_ returning after first successful deletion.
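The four behavior changes above could be sketched roughly like this, against a hypothetical `MirrorBackend` whose `stores` each expose `put`/`get`/`list`/`delete` (names are illustrative only; duplicity's actual backend interface may differ):

```python
class MirrorBackend:
    """Illustrative mirror-mode backend: every store holds a full copy."""

    def __init__(self, stores):
        self.stores = stores      # underlying backend objects
        self._read_cursor = 0     # remember the last store that worked

    def _put(self, source_path, remote_filename):
        # Mirror: write to every store instead of advancing a write cursor.
        for store in self.stores:
            store.put(source_path, remote_filename)

    def _get(self, remote_filename, local_path):
        # Try each source, starting from the last one that succeeded.
        n = len(self.stores)
        for i in range(n):
            idx = (self._read_cursor + i) % n
            try:
                self.stores[idx].get(remote_filename, local_path)
                self._read_cursor = idx
                return
            except Exception:
                continue
        raise IOError("all sources failed for %s" % remote_filename)

    def _list(self):
        # Union of all listings; a set removes duplicates.
        names = set()
        for store in self.stores:
            names.update(store.list())
        return sorted(names)

    def _delete(self, remote_filename):
        # Delete everywhere, not just on the first store that succeeds.
        for store in self.stores:
            store.delete(remote_filename)
```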

I imagine that in the parsed URL, the path would still point to the JSON file. Since the JSON file has a list at its root, the 'stripe' vs. 'mirror' metadata should probably go in a query parameter, defaulting to stripe to preserve legacy behavior.
Ex:
mirror:///path/to/config.json?mode=stripe

For behavior on upload failure, we can add another parameter:
Ex: mirror:///path/to/config.json?mode=mirror&onfail=abort
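Parsing the proposed parameters could be done with the stdlib `urllib.parse` along these lines (a sketch only; the `onfail` default of 'continue' is my assumption, not something specified above):

```python
from urllib.parse import urlparse, parse_qs

def parse_multi_url(url):
    """Extract the config-file path plus the proposed mode/onfail params."""
    parsed = urlparse(url)
    query = parse_qs(parsed.query)
    mode = query.get('mode', ['stripe'])[0]       # default: legacy striping
    onfail = query.get('onfail', ['continue'])[0]  # assumed default
    if mode not in ('stripe', 'mirror'):
        raise ValueError('unknown mode: %s' % mode)
    if onfail not in ('abort', 'cleanup', 'continue'):
        raise ValueError('unknown onfail: %s' % onfail)
    return parsed.path, mode, onfail
```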

Options for onfail would be:
 * abort - stop uploading and report the failure up as-is
 * cleanup - stop uploading, delete the file from every store where the upload succeeded, and report the failure up as-is. This helps avoid junk files piling up, although it doesn't address cleanup of other files that failed to upload
 * continue - continue uploading to other repositories
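The three onfail policies could be wired into the put path like this (hypothetical helper; `stores` objects with `put`/`delete` are stand-ins for duplicity's real backends):

```python
def mirror_put(stores, source_path, remote_filename, onfail='continue'):
    """Upload to every store, applying one of the proposed onfail policies."""
    succeeded = []
    errors = []
    for store in stores:
        try:
            store.put(source_path, remote_filename)
            succeeded.append(store)
        except Exception as exc:
            if onfail == 'abort':
                raise                       # stop and report as-is
            if onfail == 'cleanup':
                # Roll back the copies that did go through, then report.
                for done in succeeded:
                    done.delete(remote_filename)
                raise
            errors.append(exc)              # 'continue': keep going
    return succeeded, errors
```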

Further enhancements _could_ include performing the upload to each backend simultaneously, but that depends on each backend's thread safety...
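If the underlying backends do turn out to be thread-safe, the per-store uploads could be parallelized with the stdlib executor (a sketch under that assumption; each backend's thread safety would need to be verified first):

```python
from concurrent.futures import ThreadPoolExecutor

def mirror_put_parallel(stores, source_path, remote_filename):
    """Upload to all stores concurrently; re-raise if any upload failed."""
    with ThreadPoolExecutor(max_workers=len(stores)) as pool:
        futures = [pool.submit(s.put, source_path, remote_filename)
                   for s in stores]
    # Leaving the with-block waits for completion; surface any failure.
    errors = [f.exception() for f in futures if f.exception() is not None]
    if errors:
        raise errors[0]
```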

If the workload is high for the project and others are interested in this, I could implement at least the base configuration in the multibackend.py module.


Revision history for this message
Thomas Harning (harningt) wrote :

I have a work-in-progress patch; however, I have run into the issue that, given the way duply calls out to duplicity, the '&' in the TARGET causes it to fork into the background!

tags: added: patch
Revision history for this message
Thomas Harning (harningt) wrote :

Regarding the '&' in TARGET causing a fork - that was due to me running a too-old version of duply.

Revision history for this message
Thomas Harning (harningt) wrote :

Any status update on this?
Any concerns that should be addressed?

Revision history for this message
Kenneth Loafman (kenneth-loafman) wrote : Re: [Bug 1474994] Re: Multi backend should offer mirror option

Do you have a merge proposal? Have you updated the man page and the help
text? Is the merge up-to-date with the latest code?


Changed in duplicity:
importance: Undecided → Medium
milestone: none → 0.7.07
status: New → Fix Committed
Changed in duplicity:
status: Fix Committed → Fix Released