Manifests not equal because different volume numbers
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Duplicity | Fix Released | Undecided | Unassigned | |
Bug Description
I'm trying to set up remote encrypted backup to Amazon S3 on my Synology DS409+ NAS using duplicity. The backup completes without errors, but when I then try to run the backup again, I get the message:
Manifests not equal because different volume numbers
Fatal Error: Remote manifest does not match local one. Either the remote backup set or the local archive directory has been corrupted.
Either there is nothing wrong with the manifests and there is a bug in duplicity causing it to think so, or something went wrong with the backup but duplicity gave no indication of it.
I will attach my backup script and the output of the second run.
CPU: PowerPC
OS: Disk Station Manager 2.2 / Optware
Linux kernel version: 2.6.24
duplicity version: 0.6.05
python version: 2.6.4
Please let me know if I can provide additional information, or if there is some way I can determine whether it is the local archive or the remote backup set (or neither) which is corrupt.
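To illustrate why duplicity reports "Manifests not equal because different volume numbers", here is a minimal sketch of that kind of check. This is not duplicity's actual code; the function names and the manifest parsing (counting lines that begin with "Volume ") are assumptions for illustration only. Note how a truncated or 0-byte remote manifest would count as zero volumes and trigger the mismatch.

```python
# Hypothetical sketch (NOT duplicity's real implementation): compare two
# manifest texts by their number of "Volume N:" entries, which is the kind
# of check behind "Manifests not equal because different volume numbers".

def count_volumes(manifest_text):
    """Count the volume entries in a duplicity-style manifest text."""
    return sum(1 for line in manifest_text.splitlines()
               if line.startswith("Volume "))

def manifests_match(local_text, remote_text):
    """Return (ok, message); an empty remote manifest yields 0 volumes."""
    local_n = count_volumes(local_text)
    remote_n = count_volumes(remote_text)
    if local_n != remote_n:
        return False, ("Manifests not equal because different volume "
                       f"numbers: local={local_n} remote={remote_n}")
    return True, "manifests agree"

local_manifest = "Volume 1:\n    ...\nVolume 2:\n    ...\n"
remote_manifest = ""  # a 0-byte remote manifest, as reported in this bug
print(manifests_match(local_manifest, remote_manifest))
```

Under this model, either side being corrupt (truncated local archive copy or truncated remote file) produces the same fatal error, which is why the message cannot say which side is at fault.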
Changed in duplicity:
status: New → Fix Released
I have this problem on 0.6.08b - I've posted on the lists but no response so far. I think I've figured out what causes the error here: the remote manifest file is 0 bytes. Recreating the manifest by piping the local unencrypted one through gpg fixed the error, although I'm not quite sure yet what the consequences of doing that are (still testing).
What I haven't figured out is what causes the manifest on the remote side to be 0 bytes; hopefully tonight's backup might help.
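Since the symptom above is a 0-byte remote manifest, a quick way to confirm it is to scan a local copy of the backup file listing for empty manifest files. This is a hypothetical diagnostic sketch, not a duplicity feature: the directory layout and the `.manifest` filename substring are assumptions based on how duplicity typically names its files.

```python
# Hypothetical diagnostic sketch: flag 0-byte manifest files in a directory
# containing duplicity backup files (e.g. a local mirror of the S3 bucket
# contents, or the local archive directory). Names are assumptions.
import os

def find_empty_manifests(backup_dir):
    """Return the names of manifest files whose size is exactly 0 bytes."""
    return [name for name in sorted(os.listdir(backup_dir))
            if ".manifest" in name
            and os.path.getsize(os.path.join(backup_dir, name)) == 0]
```

Running this against both the local archive directory and a downloaded copy of the remote files would tell you which side holds the truncated manifest, which answers the reporter's question about where the corruption lies.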