Duplicity 0.8.09-4 can't finish backup to WebDAV

Bug #1861639 reported by rasty huty on 2020-02-03
This bug affects 1 person
Affects: Duplicity
Importance: Medium
Assigned to: Unassigned

Bug Description

When I'm trying to make a big backup (~32 GB) to a WebDAV location, it always hangs at the end (when the last volume should be uploaded) with

FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0,
(I have been waiting for 16 hours and strace keeps spamming this message.)
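
For context, the strace output quoted above is the kind of thing strace prints when attached to the stuck duplicity process; a minimal sketch of how such a trace can be captured (the pgrep lookup is an assumption, not something shown in this report):

$ # attach to the running duplicity process and follow its threads
$ strace -f -p "$(pgrep -f duplicity | head -n 1)"

While the process is hung, strace just repeats the futex(..., FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, ...) call over and over.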

Very similar to
https://bugs.launchpad.net/duplicity/+bug/1732269

With small backups (10 MB) everything is OK.
With Duplicity 0.7.19-2, big backups also work fine.

I'm using duplicity 0.8.09-4, Python 3.8.1 and gpg (GnuPG) 2.2.19.
Volumes are 200 MB (the WebDAV server allows up to 2 GB).
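
The exact command line is not shown in this report; a minimal sketch of the kind of invocation being described, with placeholder paths, key ID and host (only the 200 MB volume size and the WebDAV port come from the description above):

$ # hypothetical reconstruction, not the reporter's actual command
$ duplicity --volsize 200 --encrypt-key XXXXXXXX \
      /path/to/source webdav://user@host:10888/backup/crypt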

WebDAV server log:
Feb 03 00:21:27 : PUT:http://****:10888/***/***/duplicity-inc.20200202T082541Z.to.20200202T211329Z.vol1.difftar.gpg:127.0.0.1:10888 - Finished processing (84425ms, HTTP result: 201)
Feb 03 00:25:32 Items cache clean: removed 188 expired items
and after this it hangs with
FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0,

Kenneth Loafman (kenneth-loafman) wrote :

OK, let's see what's going on... I need some info:

1) the v9 log file (first 200 lines, last 200 lines)
2) OS and version
3) memory available for duplicity

Changed in duplicity:
status: New → Incomplete
importance: Undecided → Medium
assignee: nobody → Kenneth Loafman (kenneth-loafman)
milestone: none → 0.8.11
rasty huty (rasty-crasty) wrote :

After I switched back from 0.7.19-2 to 0.8.09-4 it started working correctly again.
I only changed some "--exclude" paths (just added some cache/tmp folders).
Maybe the RawDeltaSize isn't big now, maybe it's something else.
I think this can be closed for now; if I run into this issue again I will update the report.

Changed in duplicity:
assignee: Kenneth Loafman (kenneth-loafman) → nobody
importance: Medium → Undecided
milestone: 0.8.11 → none
status: Incomplete → Invalid
rasty huty (rasty-crasty) wrote :

Hello,

Free space in /tmp is ~8 GB, free RAM is ~10 GB.
The log contains the first 200 lines and the last 200 lines, but I removed tons of

NOTICE 16 0 73977 0 10 0 1
. 0.0KB 20:32:57 [0.0B/s] [> ] 0% ETA Stalled!

because the backup couldn't finish for more than 8 hours. I also removed sensitive data like the username / machine name.

I got the error again: it can't finish because it can't upload the last difftar.gpg.
As I understand it, the main error is:
=============================

. Backtrace of previous error: Traceback (innermost last):
. File "/usr/lib/python3.8/site-packages/duplicity/backend.py", line 374, in inner_retry
. return fn(self, *args)
. File "/usr/lib/python3.8/site-packages/duplicity/backend.py", line 534, in put
. self.__do_put(source_path, remote_filename)
. File "/usr/lib/python3.8/site-packages/duplicity/backend.py", line 520, in __do_put
. self.backend._put(source_path, remote_filename)
. File "/usr/lib/python3.8/site-packages/duplicity/backends/webdavbackend.py", line 452, in _put
. raise e
. File "/usr/lib/python3.8/site-packages/duplicity/backends/webdavbackend.py", line 440, in _put
. response = self.request(u"PUT", url, source_file)
. File "/usr/lib/python3.8/site-packages/duplicity/backends/webdavbackend.py", line 222, in request
. response = self.conn.getresponse()
. File "/usr/lib/python3.8/http/client.py", line 1322, in getresponse
. response.begin()
. File "/usr/lib/python3.8/http/client.py", line 303, in begin
. version, status, reason = self._read_status()
. File "/usr/lib/python3.8/http/client.py", line 264, in _read_status
. line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
. File "/usr/lib/python3.8/socket.py", line 669, in readinto
. return self._sock.recv_into(b)
. socket.timeout: timed out

==================================

Changed in duplicity:
status: Invalid → New
rasty huty (rasty-crasty) wrote :

Just in case, here are the last lines from the WebDAV server log before the "socket.timeout: timed out" error:

Feb 08 00:08:07 mypcname dotnet[1152]: [15]: PUT:http://127.0.0.1:10888/mypcname/crypt/duplicity-inc.20200207T032603Z.to.20200207T205233Z.vol1.difftar.gpg:127.0.0.1:10888 - Finished processing (43181ms, HTTP result: 201)
Feb 08 00:12:26 mypcname dotnet[1152]: [16]: Items cache clean: removed 250 expired items

rasty huty (rasty-crasty) wrote :

Should I try a big timeout like --timeout 172800 instead of the default --timeout 30?

rasty huty (rasty-crasty) wrote :

BTW, the OS is Arch Linux (with the latest updates).
Also, could it be connected with --asynchronous-upload?

Changed in duplicity:
status: New → In Progress
importance: Undecided → Medium
assignee: nobody → Kenneth Loafman (kenneth-loafman)
milestone: none → 0.8.12

Kenneth Loafman (kenneth-loafman) wrote :

No need for a large timeout or async at this point; that would muddy the water.

This is going to be a big log and I only need a portion. Please run with -v9, redirect to a log file, then run:

$ head -n 200 < log > log.head
$ tail -n 200 < log > log.tail

Attach log.head and log.tail to this bug report (don't copy/paste).

...Thanks

Changed in duplicity:
status: In Progress → Incomplete
rasty huty (rasty-crasty) wrote :

Hello,

I attached the requested files, I hope that's enough.

Changed in duplicity:
status: Incomplete → In Progress

Kenneth Loafman (kenneth-loafman) wrote :

Try again, but remove --asynchronous-upload.

rasty huty (rasty-crasty) wrote :

@kenneth-loafman

After removing --asynchronous-upload, all is OK (~20 incremental backups and 1 full).

rasty huty (rasty-crasty) wrote :

@kenneth-loafman
>> After removing --asynchronous-upload, all is OK (~20 incremental backups and 1 full).
I mean that the backup still finished with an error as before, but it doesn't hang anymore.

Kenneth Loafman (kenneth-loafman) wrote :

I'm confused...

Same error as before? Socket timeout?

rasty huty (rasty-crasty) wrote :

@kenneth-loafman
>> Same error as before? Socket timeout?
Yes, correct: socket timeout, but it doesn't hang anymore. Also, if I increase the timeout it works fine (no socket timeout error).
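
Putting the two observations together, the setup that reportedly completes cleanly drops --asynchronous-upload and raises the network timeout above the 30-second default; a sketch with placeholder values (the actual timeout and command line used were not posted):

$ # no --asynchronous-upload; --timeout raised well above the 30 s default
$ duplicity --volsize 200 --timeout 600 --encrypt-key XXXXXXXX \
      /path/to/source webdav://user@host:10888/backup/crypt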

Kenneth Loafman (kenneth-loafman) wrote :

OK, sounds like you found the problem. I could not reproduce it.

Changed in duplicity:
assignee: Kenneth Loafman (kenneth-loafman) → nobody
milestone: 0.8.12 → none
status: In Progress → Invalid