I've run a few more backups which have gotten a bit further. The first one displayed the "backup location does not exist" error again, even though I'd emptied the target directory in order to get a fresh, full backup. As before, duplicity continued in the background and got to volume 9800 or so before my computer locked up.
Next, I cleared out around 25 GB of cruft in the hopes of getting through backups faster, leaving me with about 225 GB to back up. The following attempt got to volume 8931 but resulted in a brand new error:
Traceback (most recent call last):
  File "/usr/bin/duplicity", line 1262, in <module>
    with_tempdir(main)
  File "/usr/bin/duplicity", line 1255, in with_tempdir
    fn()
  File "/usr/bin/duplicity", line 1228, in main
    full_backup(col_stats)
  File "/usr/bin/duplicity", line 417, in full_backup
    globals.backend)
  File "/usr/bin/duplicity", line 316, in write_multivol
    (tdp, dest_filename)))
  File "/usr/lib/python2.6/dist-packages/duplicity/asyncscheduler.py", line 145, in schedule_task
    return self.__run_synchronously(fn, params)
  File "/usr/lib/python2.6/dist-packages/duplicity/asyncscheduler.py", line 171, in __run_synchronously
    ret = fn(*params)
  File "/usr/bin/duplicity", line 315, in <lambda>
    async_waiters.append(io_scheduler.schedule_task(lambda tdp, dest_filename: put(tdp, dest_filename),
  File "/usr/bin/duplicity", line 241, in put
    backend.put(tdp, dest_filename)
  File "/usr/lib/python2.6/dist-packages/duplicity/backends/giobackend.py", line 141, in put
    self.copy_file('put', source_file, target_file)
  File "/usr/lib/python2.6/dist-packages/duplicity/backends/giobackend.py", line 70, in iterate
    return fn(*args, raise_errors=False)
  File "/usr/lib/python2.6/dist-packages/duplicity/backends/giobackend.py", line 126, in copy_file
    % (target.get_parse_name(), n, e.__class__.__name__, str(e)))
NameError: global name 'n' is not defined
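For what it's worth, the traceback suggests the real copy failure is being masked: the error-reporting format string in giobackend.py's copy_file references a variable `n` that was never bound on that code path, so the NameError replaces whatever exception the copy itself raised. A minimal sketch of that failure pattern (variable and path names here are illustrative, not duplicity's actual code):

```python
def copy_file(target):
    """Sketch of an error handler whose message formatting itself fails."""
    try:
        # Stand-in for the GIO copy operation that actually failed.
        raise IOError("simulated copy failure")
    except Exception as e:
        # 'n' (presumably a retry counter) is never assigned here, so
        # building the message raises NameError and hides the IOError.
        print("Failed to copy to %s (attempt %d): %s: %s"
              % (target, n, e.__class__.__name__, str(e)))

try:
    copy_file("/backup/duplicity-vol8931")
except NameError as err:
    print("masked by:", err)
```

So the new error is likely a bug in duplicity's error reporting rather than the underlying cause of the failed upload at volume 8931.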
Attached are 500 lines of context