Comment 46 for bug 385495

andy (andy-bear) wrote:

Hello, I've just spent a hard time troubleshooting duplicity on a Synology box - maybe this can help somebody else looking for a solution, and maybe some clever dev here can fix or work around it ;-).

The latest Synology ipkg packages at the moment are:
- duplicity: 0.6.21
- rsync: 3.0.9 protocol version 30
- NcFTP: 3.2.4
- Python: 2.7.9

Running duplicity with the FTP back-end on a big (>500 MB) source dir produces an unusable archive even locally (read: not only remotely, so NcFTP is not the bottleneck): the first run seems to be OK (although it isn't - the sigtar is truncated at 2 GB), and every following incremental backup endlessly re-creates another full backup set, holding yet another broken sigtar.
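For reference, this is roughly how I checked the locally cached sigtars - a minimal sketch assuming duplicity's default archive dir (~/.cache/duplicity); adjust the path for your setup:

    import glob
    import os

    # List the locally cached sigtar files and flag anything sitting right at
    # the 2 GiB boundary (2**31 bytes, i.e. the signed 32-bit limit) - that is
    # where mine stop growing.
    cache_dir = os.path.expanduser("~/.cache/duplicity")
    for path in sorted(glob.glob(os.path.join(cache_dir, "*", "*.sigtar*"))):
        size = os.path.getsize(path)
        mark = "  <-- stuck at ~2 GiB?" if 2**31 - 4096 <= size <= 2**31 else ""
        print("%14d  %s%s" % (size, path, mark))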

On the next incremental backup, duplicity then tells you that it can't GPG-decrypt the sigtar: it deletes the locally cached sigtar and fetches the encrypted one from the remote, which has to be decrypted locally - and that fails, since the script doesn't have the GPG key available; it would be useless anyway, because the sigtar on the remote is just as broken.

I'm not sure where the bottleneck is: is it Python or duplicity internally? FTP should not matter, since it also happens locally (I deployed my locally created seed disk to the FTP remote, and the sigtar was already truncated at 2 GB before the upload).
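To rule Python and the OS in or out, here is a quick test idea one can run on the box itself - nothing duplicity-specific, and the path is just an example on the Synology volume:

    import os

    test_path = "/volume1/tmp/lfs-test.bin"  # example path; use any dir on the ext4 volume

    # Create a sparse file bigger than 2 GiB by seeking past the signed
    # 32-bit boundary and writing a single byte there.
    with open(test_path, "wb") as f:
        f.seek(2**31 + 4096)
        f.write(b"x")

    print("resulting size: %d bytes" % os.path.getsize(test_path))
    os.remove(test_path)

If that succeeds, plain Python file I/O on the box handles >2 GB files fine, which would point the finger back at the sigtar handling.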

I've googled a lot and found the wildest theories, all wrong it seems; in the end, once I realized it is always the 2 GB boundary, I found this forum.

As a workaround, the "max-sigtar" switch should be fine; but where is the real trouble with those 2 GB? The local fs is ext4, so that can't be it. The Synology also handles huge files without any problems (e.g. some internal Linux stuff/tools/libs). So it looks like a Python/duplicity issue to me.

Another almost-free workaround could be: duplicity would back up in one pass only as much as fits under some argument-defined boundary (e.g. 2 GB). So instead of implementing some fancy, complex sigtar-splitting logic now, duplicity would simply treat the files that would make the sigtar grow too big as "not there" - producing a not-quite-complete full/incremental backup set, which is just fine: it could warn the user to "please re-run to complete". One could simulate this behaviour with --exclude/--include combinations, but hey, that is a killing exercise ;-/. Duplicity could handle this _very_ easily, I guess: just stop processing the input if size(sigtar) > $configured and print a warning; see the rough sketch below.
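Something like this - a minimal illustration only, the names (backup_pass, process_file, sigtar_path, MAX_SIGTAR_SIZE) are all made up and not duplicity's real internals:

    import os

    # Hypothetical limit: stay safely below the 2 GiB boundary.
    MAX_SIGTAR_SIZE = 2 * 1024**3 - 64 * 1024**2

    def backup_pass(source_files, sigtar_path, process_file):
        """Process files until the signature tar would grow past the limit."""
        skipped = []
        for path in source_files:
            if os.path.getsize(sigtar_path) > MAX_SIGTAR_SIZE:
                skipped.append(path)   # treat the rest as "not there" this pass
                continue
            process_file(path)         # normal backup of this one file
        if skipped:
            print("Warning: sigtar size limit reached, %d files skipped - "
                  "please re-run to complete the backup." % len(skipped))
        return skipped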

Btw: duplicity -v9 --dry-run always tells me that the '.manifest.gpg is not part of a known set; creating new set' - but this doesn't seem to influence the success or failure of the backup process. When there are no changes it doesn't upload anything, and when there are changes it seems to upload only the needed stuff - but it leaves a bad taste somehow.

Thank you for any input,
Andy