Infinite download loop for a single tree

Bug #1194085 reported by maksis
Affects: DC++
Status: Confirmed
Importance: Medium
Assigned to: Unassigned
Milestone: (none)

Bug Description

It goes as follows:

1. DC++ tries to save a downloaded tree to the hash data file, but the file isn't writable.
2. The client searches for other downloads from the same user and finds the same file again.
3. The client can't find the tree for that file and downloads it again (back to step 1).

While the problem happens relatively rarely on Windows, it's much more common with Linux clients. Even though there's a log message when the writing fails, it seems that users don't generally notice it, which is why some users have even requested a feature for banning IP addresses from the client (and I've also seen users downloading the same tree from me for several hours).

I've fixed this issue in AirDC++ 2.50 by catching the database exceptions in DownloadManager::endData, forwarding the error to the connecting item and changing the item to a state where it will connect only when the user manually forces the connection.
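
For illustration, here is a minimal, self-contained sketch of that approach, assuming hypothetical HashStore/QueueEntry types and a MANUAL_ONLY state rather than the real DC++/AirDC++ classes:

    #include <iostream>
    #include <stdexcept>
    #include <string>

    struct HashException : std::runtime_error {
        using std::runtime_error::runtime_error;
    };

    struct HashStore {
        bool writable = false; // simulate an unwritable hash data file
        void addTree(const std::string& root) {
            if (!writable)
                throw HashException("Unable to save tree " + root);
        }
    };

    enum class SourceState { AUTO_RETRY, MANUAL_ONLY };

    struct QueueEntry {
        SourceState state = SourceState::AUTO_RETRY;
        std::string error;
    };

    // Roughly what happens when a tree download finishes: instead of letting
    // the write failure pass silently and the item be retried automatically,
    // catch it, record the error and stop automatic reconnects.
    void endTreeDownload(HashStore& store, QueueEntry& item, const std::string& root) {
        try {
            store.addTree(root);
        } catch (const HashException& e) {
            item.error = e.what();
            item.state = SourceState::MANUAL_ONLY; // reconnect only when the user forces it
        }
    }

    int main() {
        HashStore store;
        QueueEntry item;
        endTreeDownload(store, item, "TTH:EXAMPLEROOT");
        std::cout << (item.state == SourceState::MANUAL_ONLY ? "manual connect only: " : "auto: ")
                  << item.error << '\n';
    }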

Tags: core
maksis (maksis)
description: updated
eMTee (realprogger)
Changed in dcplusplus:
status: New → Confirmed
importance: Undecided → Medium
Revision history for this message
eMTee (realprogger) wrote :

Instead of claiming that the download failed and playing with the connections, my proposal is to play it safe and act according to what's actually happening: the full tree is not available for whatever reason, so it flags the source in the queue item, just like when a source was unable to provide the full tree.
http://pastie.org/8225537
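
A rough, self-contained sketch of that flagging idea (the Source struct and FLAG_NO_TREE constant below are simplified stand-ins for the actual queue types used in the pastie):

    #include <cstdint>
    #include <iostream>
    #include <string>

    struct Source {
        std::string nick;
        uint32_t flags = 0;
        static constexpr uint32_t FLAG_NO_TREE = 1 << 0; // "couldn't provide the full tree"
    };

    // Called when a finished download can't be checked against the full tree,
    // whether the peer never supplied it or it couldn't be written to the hash
    // database: flag the source instead of failing the whole download.
    void flagSourceNoTree(Source& s) {
        s.flags |= Source::FLAG_NO_TREE;
    }

    int main() {
        Source s{"someuser"};
        flagSourceNoTree(s);
        std::cout << s.nick << " flagged=" << ((s.flags & Source::FLAG_NO_TREE) != 0) << '\n';
    }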

Revision history for this message
maksis (maksis) wrote :

How could this be the source's fault? The only reason I see is that the client can't write into the hash data file. Flagging the source when the tree already exists seems a bit confusing, since the only thing that matters is that the tree exists in the database (maybe it got added from another source in the meantime).

Revision history for this message
eMTee (realprogger) wrote :

How could this be a failed download (as in your solution)? Stopping the download when it could continue seems equally confusing. There are two solutions here for a rare problem that is expected to go away soon (on Windows), at least at the next restart of the client, so things go back to normal afterwards.

Both solutions accomplish the goal of stopping the spam, but in different ways. Mine is simple and also allows downloading smaller files while the problem still exists. When the client is restarted the bad flag goes away, since it isn't saved in the queue, so if the hash data file is no longer locked, things can continue in their normal way.

Completely stopping the download (yours) is another solution, but I looked at your implementation and there's a big difference between DC++ and your code in this area (the exception handling in HashManager, the ConnectionManager call), so if you'd like to push your method then please create a patch for DC++.

Revision history for this message
maksis (maksis) wrote :

I'm not saying that your solution is bad. I just wanted to point out an issue with your implementation, since it causes sources to get flagged as bad even when there was no exception (the tree already exists). Furthermore, I checked the code: doesn't the source get completely removed, instead of only getting flagged, if the file is larger than MAX_SIZE_WO_TREE?

As for my implementation, I don't believe the problem goes away unless the user investigates the issue with the file (it's a permission problem or similar).

Revision history for this message
eMTee (realprogger) wrote :

Yes, the source gets completely removed instead of only getting flagged if the file is larger than MAX_SIZE_WO_TREE, just like in the case when no full tree is provided.
Theoretically you could introduce a new flag for our case which would behave the same; I just don't think it's worth it, because the problem is the same: we cannot check the finished download against the full tree, as it's not available at the time of checking (it cannot be read from the hash database since it wasn't saved). To my thinking, these cases are similar enough to reuse this flag to solve the flooding problem.
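
For illustration, a minimal sketch of the size-based decision discussed above; the value of MAX_SIZE_WO_TREE and the helper names are assumptions rather than the actual DC++ code:

    #include <cstdint>
    #include <iostream>

    // Files at or below this size can be downloaded and verified without a
    // full tree, so a flagged source is still useful for them. The value is
    // an assumption for the sake of the example.
    constexpr int64_t MAX_SIZE_WO_TREE = 20 * 1024 * 1024;

    enum class SourceAction { FLAG_NO_TREE, REMOVE };

    SourceAction onMissingTree(int64_t fileSize) {
        // Larger files are of no use without a tree, so the source is dropped
        // entirely; smaller ones can still be fetched, so they are only flagged.
        return fileSize > MAX_SIZE_WO_TREE ? SourceAction::REMOVE : SourceAction::FLAG_NO_TREE;
    }

    int main() {
        std::cout << (onMissingTree(100LL * 1024 * 1024) == SourceAction::REMOVE) << '\n'; // 1
        std::cout << (onMissingTree(5 * 1024 * 1024) == SourceAction::REMOVE) << '\n';     // 0
    }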

Fredrik Ullner (ullner)
tags: added: core