gvfs smb / cifs file copy performance is terribly slow

Bug #1236619 reported by Forest
This bug affects 18 people
Affects        Status     Importance  Assigned to  Milestone
gvfs (Ubuntu)  Confirmed  Undecided   Unassigned

Bug Description

Copying moderate-size files to a samba share using gvfs is ridiculously slow compared to doing the same via mount.cifs. For example:

$ ls -sh testfile
1.2G testfile
$ time cp testfile ~/mnt/share-gvfs/
real 2m37.053s
user 0m0.056s
sys 0m5.120s
$ time cp testfile ~/mnt/share/
real 0m26.134s
user 0m0.004s
sys 0m1.724s

I'm running Xubuntu 13.04 (raring) amd64, gvfs 1.16.1-0ubuntu1.1.
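
For context, the two copy paths compared above were set up roughly like this; the server, share, mount points and the fuse path are placeholders/assumptions, and on 13.04 the gvfs CLI tool is gvfs-mount (newer releases use gio mount):

# gvfs / fuse path (what Nautilus and "Browse network" use)
gvfs-mount smb://server/share
ln -s /run/user/$UID/gvfs/smb-share:server=server,share=share ~/mnt/share-gvfs
time cp testfile ~/mnt/share-gvfs/

# in-kernel cifs path
sudo mount -t cifs //server/share ~/mnt/share -o username=user
time cp testfile ~/mnt/share/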

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in gvfs (Ubuntu):
status: New → Confirmed
Revision history for this message
_dan_ (dan-void) wrote :

I can confirm this.

Forest (foresto)
summary: - gvfs smb/cifs file copy performance is terribly slow
+ gvfs smb / cifs file copy performance is terribly slow
Revision history for this message
phil (fongpwf) wrote :

There are numerous reports of similar problems in various forums. Examples:
http://www.overclockers.com/forums/showthread.php?t=736989
http://ubuntuforums.org/showthread.php?t=2062335

Revision history for this message
Marcello Romani (marcello-romani) wrote :

I have experienced this bug on two Ubuntu desktop 12.04.5 x64 systems, copying to a debian 7 / samba 3.6 share.
I haven't tried other OS versions though.

Steps to reproduce:

Open remote share so it gets gvfs-mounted under ~/.gvfs
cd ~
cd .gvfs/path/to/remote/samba/share
cp ~/huge_file .

Speed: roughly 5.5 Mbyte/s

sudo mount -t cifs //server/remote/samba/share /mnt
cp ~/huge_file /mnt/

Speed: roughly maxes out the wire.
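
For anyone wanting to watch the rate live rather than derive it from time(1), something like this works (paths are placeholders; pv must be installed, and dd's status=progress needs a newer coreutils):

pv "$HOME/huge_file" > huge_file_copy
# or
dd if="$HOME/huge_file" of=huge_file_copy bs=1M status=progress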

Revision history for this message
Marcello Romani (marcello-romani) wrote :

Oddly enough, copying ~/huge_file to the same remote samba share _with Nautilus_ maxes out the wire too.

So it appears to be an issue with cp and gvfs.

Revision history for this message
François-Xavier Thomas (fx-thomas) wrote :

I can confirm this too, both with nautilus and cp. At work we recently switched to the latest LTS (14.04) but the same behavior was consistently observed on multiple Ubuntu computers running both 14.04 and 12.04 when communicating with our NAS.

  Windows 7/8 -> NAS : 50-60MB/s
  Ubuntu -> NAS (GVFS) : 5-10MB/s
  Ubuntu -> NAS (mount.cifs) : 50-60MB/s

Needless to say, we use /etc/fstab on most machines now, but I'd love to be able to get those speeds without having to fiddle with the configuration every time we get a new computer.
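
For reference, a minimal sketch of the kind of /etc/fstab line this involves; the server, share, mount point, uid/gid and credentials file are placeholders:

//nas/share  /mnt/nas  cifs  credentials=/etc/samba/nas-cred,uid=1000,gid=1000,iocharset=utf8  0  0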

Revision history for this message
JensK (jenskh) wrote :

I can confirm this issue, but with a twist.
If I manually make a command-line mount of a cifs share and use Nautilus for copying, the transfer speed is OK.
If I mount from within Nautilus through "Browse network" and use Nautilus for copying, the transfer speed is terribly slow.
I use Ubuntu Trusty 14.04.1.

Revision history for this message
Forest (foresto) wrote :

JensK, assuming that your manual command line mount uses mount.cifs, that is not a twist. It's the same thing that the rest of us are seeing when we don't use gvfs.

Revision history for this message
Remmilou (remmilou) wrote :

Not sure, but this may be of help (at least for finding the cause):
I use Debian and experienced slow transfers and eventually hangs while copying large amounts, both on CIFS and iSCSI (on the same server), with cp, rsync and Nautilus copy. But... with gnome-commander it runs smoothly.
In my situation it seems to be the speed of the server. With small amounts it's OK at 60-70 MB/s. With large amounts it started at the same speed, but then dropped, eventually to almost zero. Gnome-commander did about 30 MB/s, but carried on.
And now... mounting the server with the "sync" option gave cp and rsync that stability: 30 MB/s, but stable.
At my work, I compared a fast and a slow server. It's no surprise that the slow server did not collapse.
So my conclusions:
Slow servers probably get overloaded with the (default) async option. The buffers just cannot handle the large amounts.
Adapt your options to the server speed and your needs.
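
For completeness, a rough sketch of forcing synchronous writes at mount time as described above; the server, share and mount point are placeholders:

sudo mount -t cifs //server/share /mnt -o username=user,sync
# or switch an existing mount over:
sudo mount -o remount,sync /mnt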

Greetz,
Remco Siderius
Amsterdam, NL

Revision history for this message
Max (shad-dovv) wrote :

Changing one config in /etc/nsswitch.conf appears to have fixed my issue with this.

hosts: files wins mdns4_minimal [NOTFOUND=return] dns

I did not discover this. Credit where credit is due.
http://charlieharvey.org.uk/page/slow_nautilus_browse_with_netbios
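
For anyone trying this, a hedged sketch of the full change; the package names and exact module order are assumptions that may vary by release:

sudo apt-get install winbind libnss-winbind
# then in /etc/nsswitch.conf, move (or add) wins before mdns4_minimal:
# hosts: files wins mdns4_minimal [NOTFOUND=return] dns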

Revision history for this message
Woonjas (woonjas) wrote :

This problem still exists in 16.04 64-bit as well, and has been around for about 7 years:

https://bugs.launchpad.net/ubuntu/+source/gvfs/+bug/854959

This is a serious mark against switching to Linux; the mount -t cifs option is nice for power users, but not so much for less tech-savvy users.

Revision history for this message
Josef Mašek (masek2050) wrote :

In my case, gvfs was as fast as mount -t cifs on 16.04, but now on 18.04 gvfs is about 9 times slower than mount (5 MB/s with GVFS vs. 44 MB/s with mount, which is probably the disk limit now).

Revision history for this message
Josef Mašek (masek2050) wrote :

Problem in my case found: I am connected to the same network by both a 1 Gigabit copper interface and by wireless. When I used mount, traffic went through the 1 Gbit copper; when I used gvfs, traffic went over the wireless (~100 Mbit).
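
In case anyone wants to check for the same thing, the outgoing interface for the server can be verified like this (the address is a placeholder):

ip route get 192.168.1.10    # the "dev ..." field shows which interface is used
nmcli device status          # shows which interfaces are up and connected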

Revision history for this message
Jean-Pierre van Riel (jpvr) wrote :

Hi, to report: I also noticed the gvfs smb mount being over 10x slower in many cases, depending on the read IO pattern. It seems GVFS adds a lot of latency. Summary of a benchmark via fio:

GVFS SMB:
=========

name              type  size  block_size  latency_(ms)     bandwidth_(kb)  IOPS
----              ----  ----  ----------  ------------     --------------  ----
4k_4k_seq_read    read  4k    4k          7.597604         100             25
512k_4k_seq_read  read  512k  4k          12.608847148437  312             78.001219
4m_64k_seq_read   read  4m    64k         58.59852259375   1082            16.91779
8m_256k_seq_read  read  8m    256k        252.2923598125   1010            3.948667
16m_1m_seq_read   read  16m   1m          1079.9320893125  946             0.924481

CIFS:
=====

name              type  size  block_size  latency_(ms)    bandwidth_(kb)  IOPS
----              ----  ----  ----------  ------------    --------------  ----
4k_4k_seq_read    read  4k    4k          7.226365        250             62.5
512k_4k_seq_read  read  512k  4k          0.35290271875   9481            2370.37037
4m_64k_seq_read   read  4m    64k         4.14038503125   15003           234.432234
8m_256k_seq_read  read  8m    256k        17.80018796875  14148           55.267703
16m_1m_seq_read   read  16m   1m          83.7329983125   12145           11.860638

Bash script to test this (depends on fio, jq and column)

#json_benchmark_result_summary=''
tempfile=$(mktemp /tmp/read_test.XXXXXXXX.ndjson)
echo "# Using temp file: $tempfile"

function seq_read_benchmark() {
  local total_size="$1"
  local block_size="$2"
  echo "# Testing sequential read of $total_size size in $block_size."
  sleep 1
  fio --name="${total_size}_${block_size}_seq_read" --filename="${total_size}_${block_size}_seq_read_test.bin" --rw=read --iodepth=1 --max-jobs=1 --size="$total_size" --bs="$block_size" --output-format=json >> "$tempfile"
  rm "${total_size}_${block_size}_seq_read_test.bin"
}

seq_read_benchmark 4k 4k # 1 block read
seq_read_benchmark 512k 4k # 128 x 4k blocks read
seq_read_benchmark 4m 64k # 64 x 64k blocks read
seq_read_benchmark 8m 256k # 32 x 256k blocks read
seq_read_benchmark 16m 1m # 16 x 1m blocks read

# Reshaping JSON with jq: https://programminghistorian.org/en/lessons/json-and-jq#output-a-csv-csv
# How to format a JSON string as a table using jq?: https://stackoverflow.com/a/39144364/5472444
jq -s -r '(["name","type","size","block_size","latency_(ms)","bandwidth_(kb)","IOPS"] | (., map(length*"-"))), (.[] | .jobs[] | [.jobname, ."job options".rw, ."job options".size, ."job options".bs, (.read.lat_ns.mean/1000000), .read.bw, .read.iops]) | @tsv' "$tempfile" | column -t

# Convert JSON lines to JSON array using jq: https://stackoverflow.com/a/61867230/5472444
#jq -s '[.[] | .jobs[] | {name: .jobname, type: ."job options".rw, size: ."job options".size, "block size": ."job options".bs, "latency (ms)": (.read.lat_ns.mean/1000000), "bandwidth (kb)": .read.bw, IOPS: .read.iops}]' "$tempfile" > read_test_summary.json

# Cleanup
rm "$tempfile"
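
To compare the two transports with the script above, run it from inside each mount point in turn, e.g. (the gvfs fuse path and the script name are just examples):

cd /run/user/$UID/gvfs/smb-share:server=nas,share=data && bash ~/seq_read_bench.sh
cd /mnt/cifs-share && bash ~/seq_read_bench.sh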

Revision history for this message
Jean-Pierre van Riel (jpvr) wrote :

Apologies, launchpad didn't keep the neatly tab separated column format, but hopefully the script can help others to test.

Attached png of side-by-side comparison of script output.

Only the latency for accessing a single 4K block doesn't seem to differ much, but for every other multi-block sequential IO read, in terms of latency, bandwidth and IOPS, GVFS is over 10 times slower than CIFS.

While I know fuse mounts (userspace filesystems) will understandably be slower than kernel-space mounts, more than 10 times slower is significant and indicates there are inefficiencies / room for improvement.

Revision history for this message
Sebastien Bacher (seb128) wrote :

@Jean-Pierre, thank you for the details.

Could you perhaps report those directly upstream at
https://gitlab.gnome.org/GNOME/gvfs/issues ?

Ubuntu doesn't have a dedicated maintainer for gvfs, and the issue would have a better chance of being considered if reported directly to the people writing the software.
