gvfs performs slowly on bulk sftp transfers

Bug #250517 reported by Chris Jones
This bug affects 37 people
Affects        Status        Importance  Assigned to  Milestone
GLib           Fix Released  Medium
gvfs           Fix Released  Low
gvfs (Ubuntu)  Triaged       Low         Unassigned
Nominated for Intrepid by Paul
Nominated for Jaunty by Paul
Nominated for Karmic by Xvani
Nominated for Lucid by erp

Bug Description

Tested on a low-latency 100Mb fibre link. The scp/sftp console clients are able to hit the line speed of around 10MB/s. Copying via Nautilus is almost 10 times slower, at about 1.6MB/s.
This seems to be because gvfs does not batch up writes, but waits for a reply to each one.
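
A minimal way to reproduce the comparison from a terminal (the host name and paths below are only placeholders; gvfs-copy should exercise the same gvfs sftp backend that Nautilus uses):

$ dd if=/dev/urandom of=/tmp/test.img bs=1M count=1000    # ~1 GB test file
$ time scp /tmp/test.img me@server:                       # console-client baseline
$ gvfs-mount sftp://me@server/                            # mount the gvfs location first
$ time gvfs-copy /tmp/test.img sftp://me@server/home/me/  # same copy through gvfs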

Changed in glib:
status: Unknown → Confirmed
Changed in glib2.0:
assignee: nobody → desktop-bugs
importance: Undecided → Low
status: New → Triaged
Revision history for this message
Nick Steeves (nick-0) wrote :

I'd like to confirm that this problem is even worse on a wireless network.

Revision history for this message
Nick Steeves (nick-0) wrote :

While it still doesn't reach the same speed, I've noticed an improvement recently (using Intrepid). Is yours just as slow?

Revision history for this message
John McPherson (jrm+launchpadbugs) wrote :

ii gvfs 1.0.2-0ubuntu2
ii gvfs-backends 1.0.2-0ubuntu2
ii openssh-client 1:5.1p1-3ubuntu1

Copying a 1018MB file from an ubuntu 8.10 (Intrepid) desktop to a Debian 4.0 (Etch) server across our LAN (via 100Mbps ethernet):

scp command line is around 11.1MB/s (as reported by scp)

gvfs-copy to "sftp://" location: 8.6MB/s (as calculated by measuring the Sent bytes on the ethernet interface)

copying to the same location in nautilus by using "sftp://" location: 4.1MB/s (as reported by nautilus copy dialog) but 8.5MB/s as calculated by measuring Sent bytes on eth interface!

Copying a 400MB file from ubuntu 8.10 to debian 4.0 via 1000Mbps ethernet:

scp command line = 71MB/s

gvfs-copy = 31.9MB/s (measuring eth0 sent traffic)

nautilus = 30MB/s (measuring eth0 sent traffic), but copy dialog claims 14.6MB/s

So there are 2 problems - gvfs is significantly slower than scp for doing copies, and nautilus is incorrectly calculating the copying speed.
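
For anyone repeating the interface-byte measurement, the counters can be read straight from sysfs before and after a copy (the interface name, file and URL below are just examples):

$ B0=$(cat /sys/class/net/eth0/statistics/tx_bytes); T0=$(date +%s)
$ gvfs-copy test.img sftp://server/home/me/
$ B1=$(cat /sys/class/net/eth0/statistics/tx_bytes); T1=$(date +%s)
$ echo "$(( (B1 - B0) / (T1 - T0) / 1048576 )) MB/s"   # average send rate over the copy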

Revision history for this message
Sebastien Bacher (seb128) wrote :

How did you measure the traffic? Just by counting the seconds for the copy, or by measuring actual eth activity? It could be that nautilus generates higher activity to copy the same data (i.e. small slices, extra information, etc.).

Revision history for this message
Nick Steeves (nick-0) wrote :

I've been using iptraf to measure network traffic. My current statistics are comparable with John's. Because gvfs-copy is also significantly slower than scp, I wonder if FUSE might also be part of this bug? No time for me to check right now, but is sshfs as slow as gvfs-copy?
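
If someone has a minute, an sshfs comparison should be as simple as something like this (host and paths are placeholders):

$ mkdir -p /tmp/sshfs-test
$ sshfs me@server:/home/me /tmp/sshfs-test   # mount the remote home via sshfs/FUSE
$ time cp bigfile.img /tmp/sshfs-test/       # time the copy through the FUSE mount
$ fusermount -u /tmp/sshfs-test              # unmount when done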

Revision history for this message
Sebastien Bacher (seb128) wrote :

The fuse mounts are not used when using gvfs locations. Note that you should compare to sftp and not scp, which is a different way to do copies.

Revision history for this message
Nick Steeves (nick-0) wrote :

I don't have two Intrepid machines to test this further, but I can test Hardy-to-Intrepid if it would help. With the exception of the "file operations dialogue", all line-speed measurements were taken with iptraf.

Wireless G @ 54M
sftp peak: 1229 KB/s. Average is about 1 MB/s
 sftp reports transfer speeds between 900 KB/s and 1.1 MB/s
gvfs-copy peak: 1174 KB/s. Average is about 900 KB/s
nautilus copy: Peak 12153 KB/s. Average is about 900 KB/s
 File operations dialogue reports transfer speed as between 700 and 800 KB/s

Conclusion: wireless isn't fast enough to properly test this. Transfer rates are in the same ballpark. Hardy and Intrepid have resolved the worst of Gutsy's gvfs problems, for wireless at least. The file operations dialogue seems to use some sort of transfer-speed averaging function to smooth out the fluctuations in speed, but it needs work. I haven't looked at the code, but I would imagine that making the sampling period smaller would help (whether time or bytes transferred would work better, I'm not sure). Of course, CPU usage will go up if the sampling period is too small... Maybe keep the refresh rate of the transfer-speed display the same, but have it look at the last MB transferred, instead of however it's doing it now (I suspect it's looking at more than just 1 MB).

At any rate, John, could you please test using sftp on your gigabit ethernet? I think 802.11g bandwidth is the bottleneck, for me -- which isn't so bad. ;-)

Sebastien, are fuse mounts really not used? I tried to gvfs-copy my sftp://URL, but it fails with: "Error copying file sftp://blah/blah: The specified location is not mounted". Does gvfs-copy not use ~/.gvfs, which is a mount of type "fuse.gvfs-fuse-daemon"? (for Hardy to Hardy. I haven't yet checked to see if Intrepid maintains this behaviour)
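
If the error just means the location has to be mounted first, I would expect something like this to work (untested; the URL is a placeholder):

$ gvfs-mount sftp://blah/                        # mount the gvfs sftp location
$ time gvfs-copy localfile.img sftp://blah/blah  # then copy by URL instead of via ~/.gvfs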

Revision history for this message
Nick Steeves (nick-0) wrote :

P.S. Was there a major Hardy gvfs update between 2008-11-03 and 2009-02-13?

Revision history for this message
Xvani (fredrile+launchpad) wrote :

Still present in Jaunty... Wtf?

Revision history for this message
Fabio Marzocca (thesaltydog) wrote :

I confirm this is still present in Jaunty. It is painful.

Revision history for this message
furicle (furicle) wrote :

More data points - the problem still exists in Jaunty AFAICS.
This method should be easier for most people to reproduce than relying on iptraf etc.

I created a 120 MB file called 'test' via /dev/urandom and dd, then compared the sftp client against a regular cp over gvfs.
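
(A file of that size can be created with, for example:)

$ dd if=/dev/urandom of=/tmp/test bs=1M count=120   # ~120 MB of random data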

    ~/.gvfs/sftp on HardyBox.local/home/me
 ==> time cp /tmp/test .

 real 3m15.006s
 user 0m0.116s
 sys 0m2.320s

 ==> time sftp -b batch me@HardyBox
 sftp> put test
 Uploading test to /home/me/test

 real 0m48.166s
 user 0m9.497s
 sys 0m13.497s

 That's over my 100baseT half duplex LAN, one end Jaunty one end Hardy.

Revision history for this message
Ryan Daly (daly-ctcnet) wrote :

I can also confirm this bug still exists in Jaunty.

I was transferring a large file via Nautilus and seeing speeds that maxed out at 1.5MB/s. I then resorted to command-line sftp and am seeing speeds of 10.8MB/s.

Revision history for this message
jnns (jnns) wrote :

Just noticed that this bug exists in Karmic as well.

Revision history for this message
Jamin W. Collins (jcollins) wrote :

Yes, still seeing this in Karmic, 64-bit. Getting 3-4MB/s maximum transfer rates when using Nautilus to copy files to a remote server on the same LAN segment. The very strange bit is that sending multiple files at the same time through Nautilus results in each transfer reaching the 3-4MB/s transfer rate. So no individual transfer is maxing out the capabilities of the sender, receiver, or network.

Revision history for this message
Jamin W. Collins (jcollins) wrote :

Copy of details provided to the gnome-bugs BTS:

I believe the detail below will clearly show that gvfs is not maxing out the CPU. Note that a simultaneous transfer of two 1 gig files to the same destination takes roughly the same time as transferring a single file, which is roughly the same amount of time it takes to transfer both files sequentially using sftp.

$ apt-cache policy openssh-client
openssh-client:
  Installed: 1:5.1p1-6ubuntu2
  Candidate: 1:5.1p1-6ubuntu2
  Version table:
 *** 1:5.1p1-6ubuntu2 0
        500 http://us.archive.ubuntu.com karmic/main Packages
        100 /var/lib/dpkg/status

$ apt-cache policy gvfs-bin
gvfs-bin:
  Installed: 1.4.1-0ubuntu1
  Candidate: 1.4.1-0ubuntu1
  Version table:
 *** 1.4.1-0ubuntu1 0
        500 http://us.archive.ubuntu.com karmic/main Packages
        100 /var/lib/dpkg/status

$ dd if=/dev/urandom bs=1M count=1000 of=test1.img
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 149.168 s, 7.0 MB/s

$ dd if=/dev/urandom bs=1M count=1000 of=test2.img
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 149.596 s, 7.0 MB/s

$ time sftp -b batch 192.168.10.21:
Changing to: /home/jcollins/
sftp> put test1.img
Uploading test1.img to /home/jcollins/test1.img

real 1m31.333s
user 1m9.490s
sys 0m5.320s

$ time gvfs-copy test1.img .gvfs/sftp\ on\ 192.168.10.21/home/jcollins/

real 3m2.778s
user 0m0.120s
sys 0m1.530s

$ time gvfs-copy test1.img .gvfs/sftp\ on\ 192.168.10.21/home/jcollins/ & time gvfs-copy test2.img .gvfs/sftp\ on\ 192.168.10.21/home/jcollins/
[1] 9095

real 3m18.182s
user 0m0.150s
sys 0m1.160s
[1]+ Done time gvfs-copy test1.img .gvfs/sftp\ on\ 192.168.10.21/home/jcollins/

real 3m18.187s
user 0m0.250s
sys 0m2.410s

$ time sftp -b batch-combined 192.168.10.21:
Changing to: /home/jcollins/
sftp> put test1.img
Uploading test1.img to /home/jcollins/test1.img
sftp> put test2.img
Uploading test2.img to /home/jcollins/test2.img

real 2m59.424s
user 2m17.820s
sys 0m10.260s
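
(For reference, "batch" and "batch-combined" above are plain sftp command files; they can be recreated with something like:)

$ echo "put test1.img" > batch
$ printf 'put test1.img\nput test2.img\n' > batch-combined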

Revision history for this message
Greg Oliver (oliver-greg) wrote :

Is this ever gonna be fixed? It has been present from Intrepid all the way through Lucid now...

I assume it needs to be fixed upstream, but it really sucks..

4.1MB/s is my max with nautilus; I get over 20 from a terminal..

Revision history for this message
Resolution (norman-abi02) wrote :

Hey there,

On a 1 Gbit LAN it is about 9 MB/s with nautilus and 3 or 4 times faster using scp.

If you try this before copying

$ sudo cpufreq-selector -g performance

you'll get slightly better performance using nautilus, but not by much.

This was a test with karmic-karmic. After upgrading I'll test again with lucid-lucid.
The problem seems to have something to do with the CPU, as written before.

Is there no one who could help with this?

Revision history for this message
am (juniper1982) wrote :

I don't think this is an upstream problem. I have used GNOME 2.20 - 2.26 on Gentoo Linux and never saw this problem.

Something about Ubuntu?

Revision history for this message
Greg Oliver (oliver-greg) wrote : Re: [Bug 250517] Re: gvfs performs slowly on bulk sftp transfers

On Tue, May 18, 2010 at 4:09 PM, am <email address hidden> wrote:
> I don't think this is an upstream problem.  I have used gnome 2.20 -
> 2.26 on gentoo linux and didn't see this problem ever.
>
> something about ubuntu?
>

Hmmm, I completely forgot I subscribed to this.. I'm on an F12 box here at work with GB ethernet and 10GB servers on a 10GB switch. I can only get ~470mbits copying from ram<-->network<-->ram with nautilus.. scp nets me ~882mbits..

I can safely say it is not an Ubuntu issue for me...

-Greg

Revision history for this message
John McPherson (jrm+launchpadbugs) wrote :

Reading the upstream bug ( https://bugzilla.gnome.org/show_bug.cgi?id=523015 ) suggests that the slowness is because when nautilus writes a file via gvfs, it writes a buffer, then waits for gvfs to confirm the write, then sends more data, waits for a reply, etc.

It seems like it is a bit of a design flaw, but it might be hard to fix (see comment 2) since both nautilus and the sftp backend are using the API as designed.

It doesn't look like anyone is working on it. We try to mitigate this by not using nautilus for copies where possible.

Changed in glib:
importance: Unknown → Medium
Revision history for this message
周成瑞 (e93b5ae3) wrote :

Using 11.10 on a 100Mbps home LAN: with rsync the speed is ~11MiB/s, with nautilus 6~7MiB/s. A Google search led me here. Has there been any improvement since this was reported?

Revision history for this message
Anton (feenstra) wrote :

Still here on Oneiric, unfortunately.
This bug means that a remote backup over gvfs (using Simple Backup) takes hours for me, where it could be done in less than half an hour.
It also means I cannot play back video from my moviebox over wireless - the higher latency drops throughput below what is needed for continuous playback.
I cannot believe that this issue has been around since Intrepid and is still not fixed!

Revision history for this message
David Ayers (ayers) wrote :

I really doubt that this will be fixed in Ubuntu (only). You will probably have to talk to the upstream developers:
https://bugzilla.gnome.org/browse.cgi?product=gvfs
and/or find someone else to provide a patch to them.

Revision history for this message
martinr (martinr1111) wrote :

I also ran into the same problem on Lubuntu 10.04, which increases big file transfer times by hours.

Could it be that sftp from Nautilus selects a different cipher for its encryption?
Presumably DES (the default protocol 1 cipher), judging from the speed drop
compared to plain sftp?

For full details see this posting:
http://ubuntuforums.org/showthread.php?p=12275118#post12275118

Revision history for this message
martinr (martinr1111) wrote :

It seems to be related to this upstream bug report at Gnome:
https://bugzilla.gnome.org/show_bug.cgi?id=532951
(slow download using sftp://)

Peter Meiser (meiser79)
affects: glib2.0 (Ubuntu) → gvfs (Ubuntu)
Changed in gvfs:
importance: Unknown → Low
status: Unknown → New
Revision history for this message
Anton (feenstra) wrote :

The problem still exists in Precise (12.04), more than 5 years after it was first reported, both here (ubuntu/gvfs, but also in nautilus) and at gnome/gvfs. How come no progress *at all* has been made here? Is there any channel we could use to draw attention to this serious performance issue?

Or, alternatively, any pointers to how to hot-wire some workaround in the gvfs code that may fix this?

Revision history for this message
Mykoxy (mykoxy) wrote :

Yes, the problem still exists and has some very weird properties.

Using the sftp CLI I get speeds up to 800kb/s, which is the max due to overhead and other services that eat up upload speed.

Using nautilus gvfs, under the same conditions I get 250 - 350 kb/s.

The strange thing is, though, that when I copy another file simultaneously it reaches the same speed as the first one, even a bit faster. The maximum is reached with 3 or 4 parallel downloads, depending on the load of the uplink.

Very strange indeed...

Changed in gvfs:
status: New → In Progress
Changed in gvfs (Ubuntu):
assignee: Ubuntu Desktop Bugs (desktop-bugs) → nobody
Revision history for this message
Ville Ranki (ville-ranki) wrote :

Ross Lagerwall has made a patch which should help with this issue. I hope it will be included in the Ubuntu gvfs packages ASAP.

Revision history for this message
Jamin W. Collins (jcollins) wrote :

PPA builds with Ross Lagerwall's patches can be found here:
https://launchpad.net/~jcollins/+archive/gvfs
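
If anyone wants to test them, the archive should be usable as a normal PPA (assuming the short form for that archive is ppa:jcollins/gvfs):

$ sudo add-apt-repository ppa:jcollins/gvfs   # add the PPA
$ sudo apt-get update
$ sudo apt-get upgrade                        # pulls in the patched gvfs packages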

Revision history for this message
Ville Ranki (ville-ranki) wrote :

Thanks. The packages seem to work, but I haven't done any actual measurements (yet) to see whether the patch has any effect.

Changed in glib:
status: Confirmed → Fix Released
Changed in gvfs:
status: In Progress → Fix Released