Paramiko backend: delete() always fails if --num-retries > 1
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Duplicity | Fix Released | Medium | Unassigned | |
Bug Description
Duplicity version: 0.6.21
Python version: 2.7.3
OS: Arch Linux
Target filesystem: Linux/ext4
The paramiko backend's delete() method always fails if --num-retries is set to something bigger than 1 (which it is by default). This means that actions which delete files, such as remove-older-than, always fail.
Explanation: delete() is passed a list of files, which it successfully deletes. However, it then tries to delete all of them *again*, (num-retries - 1) more times, which obviously fails since the files have just been deleted.
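The failure mode above can be illustrated with a minimal sketch. This is not duplicity's actual code; `FakeSftp`, `delete_buggy`, and `delete_fixed` are hypothetical names modeling the behavior described: the buggy version retries the whole file list even after a successful pass, while the fixed version retries each file individually and stops once it is gone.

```python
class FakeSftp:
    """Stands in for an SFTP connection: remove() raises if the file is gone."""
    def __init__(self, files):
        self.files = set(files)

    def remove(self, name):
        if name not in self.files:
            raise IOError("No such file: %s" % name)
        self.files.remove(name)

def delete_buggy(sftp, filelist, num_retries=5):
    # Bug: the retry loop does not stop after a successful pass, so every
    # subsequent pass tries to remove files that are already gone and raises.
    for attempt in range(num_retries):
        for name in filelist:
            sftp.remove(name)

def delete_fixed(sftp, filelist, num_retries=5):
    # Fix (in the spirit of the linked branch): retry per file, and stop
    # retrying a file as soon as it has been removed successfully.
    for name in filelist:
        for attempt in range(num_retries):
            try:
                sftp.remove(name)
                break
            except IOError:
                if attempt == num_retries - 1:
                    raise
```

With `num_retries=1` the buggy version happens to work, which matches the workaround reported below: the loop body runs only once, so no second pass hits the already-deleted files.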
I have linked a branch with a fix and attached a logfile.
Related branches
- Merge proposal: review by duplicity-team pending
- Diff: 41 lines (+13/-11), 1 file modified: duplicity/backends/_ssh_paramiko.py (+13/-11)
Changed in duplicity:
- status: New → Fix Committed

Changed in duplicity:
- importance: Undecided → Medium
- milestone: none → 0.6.22

Changed in duplicity:
- status: Fix Committed → Fix Released
I have this issue, and found the same thing. I do a full backup every 30 days and clean up anything older than 30 days. The problem with --num-retries defaulting to a value greater than 1 is that we only get to clean up one day per run, as we either error out or (sometimes) segfault. Old backups keep stacking up: a daily run adds one backup per run but removes at most one.
I have changed my script to use --num-retries 1, and now things work fine.
I am using:
Duplicity version 0.6.21
scp back end.
I have tried with --extra-clean --force and found that they are not enough to get the job done. If I do the clean-up on the disk where the files reside using the file back end, things work. This seems specific to the scp back end.
Here is an example I tried:

duplicity --allow-source-mismatch --ssh-askpass remove-older-than 30D --extra-clean --no-encryption --force scp://user@host//path/filesystem
This would return to the shell with exit code 50 (file not found), or 139 when the program segfaulted (which happened sometimes, but not always).
If I added --num-retries 1 after the --extra-clean, it works fine.
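For reference, the working invocation with the workaround might look like the following. This is a command-line fragment for illustration only; the host and path are placeholders, not real values from this report.

```shell
# Workaround: force a single attempt so delete() never retries
# already-deleted files (host and path are placeholders).
duplicity --allow-source-mismatch --ssh-askpass remove-older-than 30D \
    --extra-clean --num-retries 1 --no-encryption --force \
    scp://user@host//path/filesystem
```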