Only sync relevant remote metadata to prevent big cache
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Duplicity | New | Undecided | Unassigned |
Bug Description
Duplicity version: 0.7.13.1
Python version: 2.7.12
OS Distro and version: Ubuntu 16.04.2
Type of target filesystem: Multiple
Hello,
this is a follow-up of http://
I've been using duplicity for backups since 2014, with a full backup every 3 months, so I have several secondary backup chains. In addition, the backups go to multiple separate locations (each with multiple backup chains). My local cache folder has grown to over 70 GB, so I had the idea of deleting the cache for the secondary chains to free up space, since those are only needed when restoring files, not for new backups (to the primary chains).
So I deleted the cache files (for example duplicity-
But after the next backup run, I saw this in the logs:
Synchronizing remote metadata to local cache...
Copying duplicity-
Copying duplicity-
Copying duplicity-
Copying duplicity-
...
My current workaround is: before I create a new full backup, I move the remote backup files to a new location. To duplicity the backup location then looks fresh; it even deletes the local cache automatically and starts with a full backup (which I wanted anyway).
If I later want to restore from that backup, I will have to move the files back into place on that remote.
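The workaround above can be sketched roughly as follows. This is only an illustration, not part of duplicity itself: the directories are temporary stand-ins for the real remote backup location, and the file names are made-up examples of duplicity's volume/manifest naming.

```shell
# Sketch of the workaround, using temp dirs as stand-ins for the
# (hypothetical) remote backup location and an archive location.
BACKUP_DIR=$(mktemp -d)   # plays the role of the remote backup target
ARCHIVE_DIR=$(mktemp -d)  # where the old chains get parked

# Pretend an old backup chain already exists (example file names)
touch "$BACKUP_DIR/duplicity-full.20170101T000000Z.manifest.gpg"
touch "$BACKUP_DIR/duplicity-full.20170101T000000Z.vol1.difftar.gpg"

# Move every existing chain aside so the target looks fresh; the next
# backup run then starts a new full chain with a small local cache.
mv "$BACKUP_DIR"/duplicity-* "$ARCHIVE_DIR"/

ls -A "$BACKUP_DIR"   # now empty
ls -A "$ARCHIVE_DIR"  # holds the archived chain

# To restore from the archived chain later, the files would have to be
# moved back: mv "$ARCHIVE_DIR"/duplicity-* "$BACKUP_DIR"/
```

The actual backup invocation is unchanged; only the location the files sit in is shuffled around.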
It would be great to have a command-line option that prevents syncing any metadata except the current chain's. Ideally duplicity could even determine which metadata the current operation needs and sync only that; for example, restoring from a secondary backup chain would sync only that chain's metadata. Metadata that was deleted manually but is not needed for the current operation would then stay deleted.