Wrong destination storage size detected

Bug #1907138 reported by mc
This bug affects 1 person
Affects: Rapid Photo Downloader
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

I use the application in an Ubuntu container deployed on a QNAP NAS, to download photos from an SD card directly to the NAS disks. Everything worked perfectly with a very old version of Rapid Photo Downloader. Now I have updated things, moved to a new container, and installed a new Rapid Photo Downloader, which has some issues.

Rapid Photo Downloader does not detect the size of the destination storage correctly. I have a few terabyte disks; they are mirrored and I can use them, upload to them, etc. The application detects only 16 MB free out of 17 MB (see attached screenshot). That is only enough to download one photo at a time: when one finishes I can do the next one, and the next one, and the next one...

Where does this come from? It seems the application does not gather the disk sizes correctly. In the container, there are mounts for all shares from the NAS, and they appear in the /nas_share folder. That folder is in fact mounted at /.share. `df` also says that /.share has only 16 MB, but all the folders inside it are separate mounts. Since it is possible to download directly into the container, I tried to trick the application with a symbolic link to a folder that is on the NAS disks; this actually crashes the directory tree listing in Rapid Photo Downloader.
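To illustrate (a minimal sketch with placeholder paths; substitute the real share names), each folder inside /.share really is a separate mount point, which is easy to confirm from Python:

import os

# Hypothetical paths; substitute the real share names.
parent = '/.share'
submount = '/.share/CACHEDEV2_DATA'

for path in (parent, submount):
    # os.path.ismount() is True when the path sits on a different
    # device than its parent directory, i.e. it is a mount point.
    print(path, 'is a mount point:', os.path.ismount(path))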

Currently, the only option is to download into the container itself and then move the files to the final destination.

Revision history for this message
Damon Lynch (dlynch3) wrote :

You answered your own question when you used the command df. No application can magically determine, prior to download, what the available storage space will be when it does not know precisely which subfolders the files will be downloaded into. All it can go on is the size of the volume the destination is located on, which 99.9% of the time is the same volume as its subfolders.

The code in question is found here:

https://github.com/damonlynch/rapid-photo-downloader/blob/main/raphodo/storage.py#L1721

It's using standard system calls. Nothing fancy.
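In essence the logic looks like this. A minimal sketch of the standard-system-call approach, not the actual code in storage.py; the destination path is a placeholder:

import shutil

dest = '/.share'  # whatever path was chosen as the download destination

# shutil.disk_usage() wraps os.statvfs() on Linux: the kernel reports
# totals for the filesystem backing this exact path. If that path is
# the 16 MB tmpfs at /.share rather than one of the real disks mounted
# beneath it, 16 MB is the honest answer.
usage = shutil.disk_usage(dest)
print(f'total={usage.total:,} bytes, free={usage.free:,} bytes')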

The first thing you need to do is set up your NAS / container so that the actual size of the storage medium is correctly reported. Download to a destination folder *inside* the actual destination storage medium, not the mount point in which it resides (see the sketch after the two example paths below):

/media * DO NOT CHOOSE THIS *
/media/photo_destination_storage/photos * CHOOSE SOMETHING LIKE THIS *
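The difference is easy to demonstrate. A minimal sketch, assuming placeholder paths that mirror the example above:

import shutil

# Hypothetical paths; substitute your real layout.
bad = '/media'                                    # the enclosing mount point
good = '/media/photo_destination_storage/photos'  # inside the real disk

for path in (bad, good):
    u = shutil.disk_usage(path)
    # Each call reports the filesystem that contains the path,
    # so the two answers come from different volumes.
    print(f'{path}: {u.free / 1e9:.1f} GB free of {u.total / 1e9:.1f} GB')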

Revision history for this message
mc (matic-cankar) wrote :

Thanks for your quick response! That's some of the fastest support I've ever had ;)

Here is where I face a problem. I'm not sure I want to change anything in this container, as this is the default setup: everything is auto-generated, and changes may cause other problems. QNAP gives you a "Linux Station", which creates a Linux container for you; you can then use the HDMI output of the NAS to connect it directly to a monitor or TV... (see https://www.qnap.com/en/product/ts-453a)

However, I'm a bit confused about how this actually works. From inside the container, you can access all NAS shared folders through the root folder /nas_share/<shared folders>; in my case the path would be, for example, /nas_share/<shared space>/Media/Pictures.

What is strange here is that there is no mount point for /nas_share, but I found there is also a hidden "/.share" folder which includes everything from /nas_share/ and more. All those generated disks are there, with strange mapping names:

/nas_share$ df -h | grep share

tmpfs 16M 4.0K 16M 1% /.share
tmpfs 16M 0 16M 0% /.share/NFSv=4
/dev/mapper/cachedev2 4.0T 2.8T 1.2T 71% /.share/CACHEDEV2_DATA
/dev/mapper/cachedev1 2.8T 1.7T 1.2T 59% /.share/NFSv=4/Public
tmpfs 48M 64K 48M 1% /.share/CACHEDEV1_DATA/.samba/lock/msg.lock
/share/CACHEDEV1_DATA/.__eN__BACKUPS 2.8T 1.7T 1.2T 59% /.share/CACHEDEV1_DATA/BACKUPS
tmpfs 64M 1.1M 63M 2% /.share/CACHEDEV3_DATA/.qpkg/CodexPack/tmp
none 7.8G 0 7.8G 0% /.share/CACHEDEV3_DATA/.qpkg/CodexPack/sys/fs/cgroup
tmpfs 7.8G 0 7.8G 0% /.share/CACHEDEV3_DATA/.qpkg/CodexPack/run
none 7.8G 0 7.8G 0% /.share/CACHEDEV3_DATA/.qpkg/CodexPack/run/shm
none 7.8G 0 7.8G 0% /.share/CACHEDEV3_DATA/.qpkg/CodexPack/run/lock
none 7.8G 0 7.8G 0% /.share/CACHEDEV3_DATA/.qpkg/CodexPack/run/user
hdsfusemnt 290M 282M 8.6M 98% /.share/CACHEDEV3_DATA/.qpkg/MediaSignPlayer/CodexPackExt/share
none 7.8G 0 7.8G 0% /.share/CACHEDEV3_DATA/.qpkg/MediaSignPlayer/CodexPackExt/sys/fs/cgroup
tmpfs 7.8G 0 7.8G 0% /.share/CACHEDEV3_DATA/.qpkg/MediaSignPlayer/CodexPackExt/run
none 7.8G 0 7.8G 0% /.share/CACHEDEV3_DATA/.qpkg/MediaSignPlayer/CodexPackExt/run/shm
none 7.8G 0 7.8G 0% /.share/CACHEDEV3_DATA/.qpkg/MediaSignPlayer/CodexPackExt/run/lock
none 7.8G 0 7.8G 0% /.share/CACHEDEV3_DATA/.qpkg/MediaSignPlayer/CodexPackExt/run/user
/dev/sde1 1.4T 1.2T 207G 86% /.share/external/DEV3303_1
/.share 16M 4.0K 16M 1% /share2
/dev/sdg1 15G 6.9G 8.0G 47% /.share/external/DEV3301_1
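Given nesting like this, a destination's real capacity is whatever its nearest enclosing mount reports. A hedged sketch of how one could find that mount, assuming a hypothetical destination path:

import os
import shutil

def enclosing_mount(path: str) -> str:
    # Walk upward until we reach a mount point; '/' always is one.
    path = os.path.realpath(path)
    while not os.path.ismount(path):
        path = os.path.dirname(path)
    return path

dest = '/.share/CACHEDEV2_DATA/Media/Pictures'  # hypothetical destination
mount = enclosing_mount(dest)
print(mount, shutil.disk_usage(mount))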

Apparently, I would be able to access the same destination u...


Revision history for this message
Damon Lynch (dlynch3) wrote :

You can specify the destination path using the command line. Run `rapid-photo-downloader --help` from a terminal for details on how to do this.

For more discussion, I suggest opening a thread at https://discuss.pixls.us/ with the topic "How do I set up a QNAP Linux container on my NAS for use with Rapid Photo Downloader?"

And then go into as much detail as you can regarding what you are trying to achieve, and the limitations you face.

There is probably somebody in the Pixls community who knows a lot about setting up a NAS Linux container to suit your needs.

Meanwhile, I will mark this bug as invalid: not because you are not facing a problem, but because if something specific needs to change in the Rapid Photo Downloader code, that will require a different and very specific bug report.

Changed in rapid:
status: New → Invalid