Comment 6 for bug 1875937

Jack (jack-234521) wrote:

My incremental job uses the same parameters as the full, and it fails on Glacier Deep Archive because duplicity tries to pull the remote metadata (the signature files) and can't retrieve it. Here's what I run:

duplicity /mnt/backup/src boto3+s3://BUCKETNAME/FOLDERNAME \
  --file-prefix-archive archive-$(hostname -f)- \
  --file-prefix-manifest manifest-$(hostname -f)- \
  --file-prefix-signature signature-$(hostname -f)- \
  --s3-use-deep-archive

...and here's the output:
Synchronizing remote metadata to local cache...
Copying signature-backup.HOSTNAME.com-duplicity-full-signatures.20200830T050003Z.sigtar.gpg to local cache.
Attempt 1 failed. ClientError: An error occurred (InvalidObjectState) when calling the GetObject operation: The operation is not valid for the object's storage class

It retries four more times and then fails completely.
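
For what it's worth, this matches how Deep Archive works in general: GetObject on an object in the GLACIER or DEEP_ARCHIVE storage class returns InvalidObjectState until a restore has been requested and has completed. A minimal boto3 sketch of that restriction (bucket, folder, and hostname are the placeholders from above):

  import boto3

  s3 = boto3.client("s3")

  key = ("FOLDERNAME/signature-backup.HOSTNAME.com-"
         "duplicity-full-signatures.20200830T050003Z.sigtar.gpg")

  # Objects in DEEP_ARCHIVE cannot be fetched directly; a restore request
  # must be issued and finish (which takes hours for Deep Archive) before
  # GetObject stops raising InvalidObjectState.
  s3.restore_object(
      Bucket="BUCKETNAME",
      Key=key,
      RestoreRequest={
          "Days": 1,
          "GlacierJobParameters": {"Tier": "Bulk"},  # Expedited is not supported for DEEP_ARCHIVE
      },
  )

  # Only after the restore job completes would this succeed:
  # s3.get_object(Bucket="BUCKETNAME", Key=key)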

I'm still not following the logic here anyway: wouldn't it be better to store the signature files in the standard storage class and let the metadata sync proceed as usual, rather than just disabling metadata sync for non-standard storage classes? The behavior seems inconsistent across AWS storage classes, and I can't get incrementals to work at all with Glacier Deep Archive.
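
To make the suggestion concrete (a hypothetical sketch of the idea, not duplicity's actual backend code): PutObject accepts a per-object StorageClass, so the backend could send only the archive volumes to DEEP_ARCHIVE and keep the manifest and signature files in STANDARD, where the metadata sync can still read them:

  import boto3

  s3 = boto3.client("s3")

  def upload(bucket, key, path):
      # Hypothetical routing based on the file prefixes used above:
      # only the bulk "archive-" volumes go to Deep Archive, while the
      # manifest/signature metadata stays in STANDARD so the cache sync
      # can GetObject it without a restore.
      filename = key.rsplit("/", 1)[-1]
      storage_class = "DEEP_ARCHIVE" if filename.startswith("archive-") else "STANDARD"
      with open(path, "rb") as f:
          s3.put_object(Bucket=bucket, Key=key, Body=f, StorageClass=storage_class)

The archive volumes are the only large objects anyway, so keeping the small metadata files in STANDARD would cost almost nothing while letting incrementals work unchanged.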