Heavy memory leak in gvfsd-http

Bug #225615 reported by Mika Fischer on 2008-05-02
This bug affects 21 people
Affects / Status / Importance / Assigned to
gvfs: Fix Released, Medium
gvfs (Ubuntu): Medium, Christian Kellner
Nominated for Hardy by nikji
Nominated for Intrepid by Ciprian Ocean
Nominated for Jaunty by Ciprian Ocean

Bug Description

Binary package hint: gvfs

gvfsd-http in Hardy has a very big memory leak. The attached screenshot was taken after downloading more than 1 GB of files by dragging links from Firefox into a Nautilus window.

Mika Fischer (zoop) wrote :
Sebastien Bacher (seb128) wrote :

Thanks for your bug report. Could you try to get a valgrind log for the crash (you can follow the instructions on https://wiki.ubuntu.com/Valgrind)?

Changed in gvfs:
assignee: nobody → desktop-bugs
importance: Undecided → Medium
status: New → Incomplete
Sebastien Bacher (seb128) wrote :

Christian, is this a design issue that causes the file to be stored entirely in memory while it is copied?

Mika Fischer (zoop) wrote :

I should probably make it clearer that the screenshot was taken *after* all the downloads were completed!

But even if gvfsd-http used that much memory only while downloading, it would still be a major bug! I mean, come on, holding a multi-gigabyte file completely in memory while downloading? There has to be a better way!

I'm lucky because I have 4GB of RAM, most people wouldn't be so lucky. Their systems would slow down to a crawl because of all the swapping.

I don't know how to invoke the gvfsd-http backend manually under valgrind. And the output would probably not be very useful without debug symbols anyway...
If you tell me specifically how to start gvfsd and how to stop it correctly (because if I just kill it the valgrind output would also not be very useful), I'd be happy to provide valgrind output.

But this issue should be really easy to reproduce: Just find something to download that's larger than 20 MB, then drag the link to your desktop and watch gvfsd-http eat your memory and stay that way even after the download is finished.
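Regarding how to get the requested valgrind log: here is a sketch of one approach, assuming Ubuntu's gvfs install path (`/usr/lib/gvfs`) and a running GNOME session; the exact path and procedure may differ on your system. Because the per-protocol backends (gvfsd-http, gvfsd-dav, ...) are spawned by the master gvfsd daemon, the simplest route is to run gvfsd itself under valgrind and let it follow the children:

```shell
# Stop the session's gvfs daemons first.
pkill gvfsd

# Make GSlice fall back to plain malloc so valgrind sees every
# allocation (otherwise g_slice_alloc hides them in its own pools).
export G_SLICE=always-malloc G_DEBUG=gc-friendly

# gvfsd spawns the backends itself, so follow children;
# %p gives one log file per process.
valgrind --trace-children=yes --leak-check=full \
         --log-file=valgrind-%p.log /usr/lib/gvfs/gvfsd
```

Then reproduce the leak (e.g. drag a large download into Nautilus), unmount or quit cleanly so the daemon exits, and look at the `valgrind-*.log` that belongs to gvfsd-http.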

Christian Kellner (gicmo) wrote :

I'll have a look..

Changed in gvfs:
assignee: desktop-bugs → gicmo
Michael R. Head (burner) wrote :

Confirmed in gvfsd-dav, too.

Reproducible using gnome-user-share on machine 1 and connecting via network:// on machine 2:
Transfer any file from Public on machine 1.
Fire up top and watch the memory blow up.

You'll also find that when the total size of the files copied exceeds free memory, the copy process fails and the dav mount must be unmounted before it can be used again.

I've been hunting through gvfsbackendhttp.c and soup-input-stream.c (and libsoup itself), but I haven't found the leak yet. Presumably there's some buffer that's not being freed somewhere after the data is pulled out...

Guess I'll mark as confirmed.

Changed in gvfs:
status: Incomplete → Confirmed
Michael R. Head (burner) wrote :

Also, how does one launch a gvfs backend within valgrind to test it?

Austin Lund (austin-lund) wrote :

Attached is a more useful valgrind log.

I don't have all the symbols for libc, but I have the dbgsym packages installed so I don't know why.

pparkkin (pparkkin) wrote :

I am running into this when downloading podcasts with rhythmbox.

Just thought I'd let you guys know what all this affects, make it seem more important to fix.

Thanks!

dagr (dag-ringdal) wrote :

I had a similar experience with gvfsd-ftp. I had mounted a share on my machine and did some resync. Suddenly 50% of all CPU power was soaked up by this process. When I killed the process, CPU usage went back to normal.

dagr

Ben J Woodcroft (donttrustben) wrote :

I can confirm pparkkin's comment about downloading podcasts with rhythmbox.

Gudularite (antoine-mouton) wrote :

I have the problem too with Rhythmbox.

Austin Lund (austin-lund) wrote :

I'm pretty sure this trace is the core of the problem:

==8552== 21,765,138 bytes in 30,061 blocks are still reachable in loss record 47 of 47
==8552== at 0x4C22FAB: malloc (vg_replace_malloc.c:207)
==8552== by 0x63CDC8B: g_malloc (gmem.c:131)
==8552== by 0x63E1737: g_slice_alloc (gslice.c:824)
==8552== by 0x63E3052: g_slist_append (gslist.c:117)
==8552== by 0x58B3199: append_buffer (soup-message-body.c:389)
==8552== by 0x58B4C43: read_body_chunk (soup-message-io.c:308)
==8552== by 0x58B4FC2: io_read (soup-message-io.c:754)
==8552== by 0x58B5906: io_unpause_internal (soup-message-io.c:955)
==8552== by 0x63C6261: g_main_context_dispatch (gmain.c:2009)
==8552== by 0x63C9515: g_main_context_iterate (gmain.c:2642)
==8552== by 0x63C97D6: g_main_loop_run (gmain.c:2850)
==8552== by 0x40B73F: daemon_main (daemon-main.c:270)
==8552== by 0x40B9A0: main (daemon-main-generic.c:39)

It seems to read the entire file into a buffer rather than flushing it out to disk at regular intervals. However, I have no experience with libsoup and I'm finding the code hard to understand.

Martin Olsson (mnemo) wrote :

Upstream has suggested but not yet committed these changes (attached). I've tried them, and on my machine the big leak is gone.

James Lewis (james-fsck) wrote :

I am seeing this problem too, and I can't keep Rhythmbox running for more than a few minutes: it has a queue of podcasts waiting to download, and it just reports a segmentation fault with no other error. I have an strace of that.

I don't know if these issues are related, but a lot of people are reporting problems with this in connection with Rhythmbox and podcast downloads... maybe this will help.

Martin Olsson (mnemo) wrote :

If you're getting a segfault, that's unrelated to this bug. However, there was recently another bug in RB where it would crash when it started to download the second file, every time you selected more than one file, right-clicked and selected download. That bug has also been fixed upstream.

See this bug report:
http://bugzilla.gnome.org/show_bug.cgi?id=554556

Wouter Bolsterlee (uws) wrote :

Upstream GVFS 1.1.2 has this in the NEWS file:
> * http: Fix major memory leak

See http://svn.gnome.org/viewvc/gvfs/trunk/NEWS?revision=2136&view=markup

Changed in gvfs:
status: Unknown → Fix Released
Pedro Villavicencio (pedro) wrote :

Fixed upstream, thanks for reporting.

Changed in gvfs:
status: Confirmed → Fix Committed
Launchpad Janitor (janitor) wrote :

This bug was fixed in the package gvfs - 1.1.3-0ubuntu1

---------------
gvfs (1.1.3-0ubuntu1) jaunty; urgency=low

  * New upstream version:
    - ftp: fix limited number of connections causes commands to fail
    - trash: fix parallel build doesn't work
    - trash: add trash::orig-path and trash::deletion-date info
    - trash: set files to mode 700 before deleting to deal with users trashing
      read-only directories
    - smb-browse: browsing authentication support (lp: #193232, #207072)
    - smb-browse: make backend not automounted anymore
    - New trash backend (lp: #7560, #187565, #201393, #206747, #207835, #216739)
    - Use the new shadow mount facility in gio
    - gphoto2: Use shadow mounts
    - obex: Fix icon for root directory
    - http: Fix major memory leak (lp: #225615)
    - http: Support proxies
  * debian/patches/01_maintainer_mode.patch,
    debian/patches/90_relibtoolize.patch:
    - commented debian change for now since it's not really required and create
      build issue using the jaunty libtool version
  * debian/patches/90_correct_glib_use.patch:
    - the issue is fixed in the new version

 -- Sebastien Bacher <email address hidden> Wed, 07 Jan 2009 22:52:11 +0100

Changed in gvfs:
status: Fix Committed → Fix Released
Dylan McCall (dylanmccall) wrote :

Could this fix be sent to Intrepid? I just observed the issue here, with gvfsd-http consuming 100 MB of memory 12 hours after copying a 60-ish MB file over HTTP. Not expected behaviour, and responsible for quite a loss.

Sebastien Bacher (seb128) wrote :

The change could be backported if somebody is interested in working on it; the Ubuntu desktop team has limited resources, though, and the focus is on Jaunty now rather than Intrepid, which is not an LTS and should be upgraded to Jaunty soon by most users.

Bob Wya (robert-mt-walker) wrote :

I am using Intrepid Ibex (well, actually Linux Mint 6) on AMD64. I downloaded a few small **audio** podcasts with RB and now I am stuck at 3.9 GB RAM usage (out of 4 GB), with gvfs using some 2.5 GB!

I can't install Jaunty on this rig because the older X1950 Pro (512) ATI drivers are incompatible with the newer X-Server version in use in Ubuntu 9.04. The newer ATI drivers (Catalyst 9.xx series) which are compatible with Jaunty are incompatible with my card (due to ATI dropping driver support for R500 series and previous).

I couldn't get Ubuntu 8.04 LTS or Mint 5 to install; the installer just drops out to BusyBox. So that is not an option either...

So I like many others use Intrepid and suffer a memory leak the size of the USA!! Not too impressed 'bout this... This is after all a major bug - not a little glitch... Since Jaunty is effectively incompatible with ATI cards R500 (& previous) there will be a lot of folks stuck on 8.10 without a paddle...

Bob

nikji (mind-control) wrote :

Sorry about this: " Nominated for Hardy by nikji". My fault :(

In fact I'm using Jaunty Jackalope; with 3 GB of RAM, VirtualBox occupies around 2 GB and the other 1 GB goes to gvfsd-http.

Hope the new Ubuntu 9.10 will have this fixed :)

Alec Wright (alecjw) wrote :

I can confirm that this issue exists in Ubuntu 10.04; gvfsd was using 6 GB of RAM.

Alec Wright (alecjw) wrote :

(I'm not sure when the problem started for me, but it could have been because I mounted an SSH share. I started transferring an 88 MB file before cancelling, then noticed this issue several hours later.)

I am having this problem in Linux Mint 9 x64 (Ubuntu 10.04) when backing up with Deja Dup over WebDAV to online storage. gvfsd-dav is using 2-3 GB of RAM after a while and doesn't release it until the backup is complete or interrupted.

Jody Richardson (wcc-student) wrote :

I have this same issue on Ubuntu 10.04 "Lucid Lynx" X64 edition when copying files across the network from one machine to the other. Both use the same OS. The only thing I see which may affect the whole situation is the drives containing the files to copy and store the information are NTFS instead of EXT4.

Changed in gvfs:
importance: Unknown → Medium
Christopher (captain-c) wrote :

I am still having this issue in 10.10 x64, using gvfsd-dav to transfer files to WebDAV locations.

Dan Jared (danjaredg) wrote :

Regression in Ubuntu 12.10.

Dragoneyes (stiu) wrote :

Confirming regression in Ubuntu 12.10 64-bit. gvfsd-http is using more than 1 GB of memory.

Si Dedman (si-dedman) wrote :

Having this problem in Xubuntu 13.04: Rhythmbox with loads of plugins, also using Google Music, which is uploading my library to the cloud. Memory use grew to 1132 MB before I killed it; Rhythmbox had already been killed. Please let me know if any log is required, and how to get it, if not covered above. Thanks.

vkapas (vkapas) wrote :

Same problem in Ubuntu 10.04.3 with gvfs 1.6.1-0ubuntu1build1 during the backup process by Deja Dup.

gvfsd-dav takes up to 100% of RAM (4 GB) and up to 100% of swap (also 4 GB) if the total size of all the files being copied is more than ~4-5 GB.
