Failing to download photos from CIFS

Bug #1682998 reported by Robert V.
This bug affects 1 person
Affects: Rapid Photo Downloader
Status: Fix Released
Importance: Undecided
Assigned to: Damon Lynch
Milestone:

Bug Description

I download photos from cards to the Synology NAS. The images are shared using AFP/SAMBA/CIFS and I process them from the shared folder. The folders are mounted like this:

//IP_ADDRESS/USBCopy on /mnt/photos_in type cifs (rw,nosuid,nodev,noexec,relatime,vers=1.0,cache=strict,username=me,domain=,uid=1000,forceuid,gid=1000,forcegid,addr=IP_ADDRESS,unix,posixpaths,serverino,mapposix,acl,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1,user=me)

When I try to download them from the share, RPD sees them, creates previews, but almost all images fail to copy with:

Unable to copy file DSC_7741.NEF
Error: 5 Input/output error

I don't quite understand the error, as the files are readable - I have never had this problem using other tools like Digikam, Darktable, or even cp.

RPD displays the error after some time and even stops the download (noticed while retrying the process - RPD is responsive, PAUSE (and resume) is available, but iotop shows no disk activity):

ERROR An unhandled exception occurred
ERROR Traceback (most recent call last):
  File "/home/robertvalik/.local/lib64/python3.5/site-packages/raphodo/rapid.py", line 2990, in copyfilesBytesDownloaded
    assert chunk_downloaded >= 0
AssertionError

I suspect that it may be caused by the cifs mount as RPD has no problem copying the files directly from the card reader.

RPD log is attached.

Any suggestions where I should look?

Revision history for this message
Robert V. (robertvalik) wrote :
Revision history for this message
Damon Lynch (dlynch3) wrote :

I have no experience with a NAS. However, clearly something is going badly wrong when attempting to copy from the NAS to your computer.

Rapid Photo Downloader is coded in Python, and uses Python's standard library to undertake copy operations. Those standard library operations, like read and write, are reporting an error. That means there is either a problem with the Python standard library (unlikely, but possible, I suppose) or with your setup (much more likely).

The fact that you can access the files using other tools may not mean much, because a problem with your setup may manifest only when files are being rapidly accessed and copied, which is what Rapid Photo Downloader does: a single process copies files one after the other in rapid succession, with very minimal pauses between copy operations.

I suppose another possibility is that when copying from a NAS, it is not designed to have files read from it 1 MB at a time (which is the strategy Rapid Photo Downloader uses). If you can do some research into whether this is a limitation when copying from a NAS, I would appreciate it.
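The copy strategy described above can be sketched roughly like this (a minimal illustration, not RPD's actual code; the function name and the exact chunk size are assumptions):

```python
CHUNK_SIZE = 1024 * 1024  # read 1 MiB at a time, as described above

def copy_in_chunks(src: str, dst: str) -> int:
    """Copy src to dst one chunk at a time, returning bytes copied.

    On a misbehaving network mount, the read() call can fail mid-file
    with OSError: [Errno 5] Input/output error, which is the error
    reported in this bug.
    """
    copied = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(CHUNK_SIZE)  # may raise OSError (EIO) on a bad CIFS mount
            if not chunk:  # empty read means end of file
                break
            fout.write(chunk)
            copied += len(chunk)
    return copied
```

Many back-to-back reads of this size are what distinguishes this workload from casually opening single files in other tools.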

In any case, thanks to your bug report, I have made some changes to the code to better assist error diagnosis, as well as fixing the bug that triggered the assertion failure.
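One way the assertion failure could be turned into a diagnosable error is shown below (a hypothetical sketch, not the actual fix; the function name is invented for illustration):

```python
def record_bytes_downloaded(chunk_downloaded: int, total: int) -> int:
    """Accumulate downloaded bytes, rejecting impossible negative deltas.

    Replacing a bare `assert` with an explicit check means the problem is
    still reported when Python runs with assertions disabled (-O), and the
    message carries context for diagnosing a misbehaving mount.
    """
    if chunk_downloaded < 0:
        raise ValueError(
            f"Negative chunk size {chunk_downloaded}: "
            "the source may have shrunk or the mount returned bad data"
        )
    return total + chunk_downloaded
```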

Changed in rapid:
status: New → Incomplete
assignee: nobody → Damon Lynch (dlynch3)
Revision history for this message
Robert V. (robertvalik) wrote :

You are absolutely right. I tried to reproduce this issue using something like:

for i in *
do
    dd if="$i" of="/tmp/$i" bs=1M
done

This basically does the same thing as you describe - copies files in succession with a 1 MB buffer.

I got a lot of I/O errors - the numbers correlate with RPD's error rate. I had never encountered an I/O error from the NAS over CIFS before, and every other usage scenario I have seems to work flawlessly.

Revision history for this message
Robert V. (robertvalik) wrote :

After enabling "Enable SMB 2 and Large MTU" on the Synology NAS and adding vers=2.0 to mount options in /etc/fstab, the transfer works OK.
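For reference, the relevant /etc/fstab entry would look something like the following (the IP address, share name, mount point, and credentials are placeholders taken from the mount output in the original report; other options may be needed for a given setup):

```
//IP_ADDRESS/USBCopy  /mnt/photos_in  cifs  vers=2.0,username=me,uid=1000,gid=1000  0  0
```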

Thank you for pointing me in the right direction.

Damon Lynch (dlynch3)
Changed in rapid:
status: Incomplete → Fix Released