/usr/bin/X leaks memory with firefox

Bug #111299 reported by cybotoy11 on 2007-04-30
Affects / Status:
  Mozilla Firefox: Fix Released
  firefox (Ubuntu): Won't Fix
  xulrunner-1.9 (Ubuntu): Fix Released

Bug Description

my configuration:
xubuntu 7.04
google toolbar & some other plugins installed

steps to reproduce the bug:
1. run firefox and open http://maps.google.com/
2. move the map; while doing this, memory keeps filling up
3. close firefox; some memory becomes free and available, but some remains held by the /usr/bin/X process

repeat these 3 steps again and again: the memory leak takes more and more memory.

Confirming... The thing is, the x11 pixmap is the _only_ place we're storing our
image data. I seem to recall some discussion about catching BadAlloc and
dragging all the image data back over to the client at that point before
retrying, but I can't recall what bug that was in....

bug 210931 is about trapping errors, but I don't think it was planning on doing this (sounds like a good idea, but first we have to deal with certain other problems...)

I have the same bug while using Firefox on an X-Terminal with 64 MB of RAM under Ubuntu Dapper. When I load several pictures, the X-Terminal gets slower and slower until it freezes. The mouse stops responding, as does the keyboard (except SysRq, which I use to reboot).

While attempting to reproduce the bug with gnome-system-monitor open, the amount of X memory used by Firefox increases each time a picture is loaded until it reaches 60.5 MB, at which point the X-Terminal freezes.

You can reproduce this bug by loading a weather website at http://www.meteociel.fr/modeles/gfs/vent-moyen/3h.htm and rolling the mouse over the links on the left side of the weather pictures under the word "Échéance" (a JavaScript script loads a picture each time you hover over a link, and eats about 3 MB of X memory on our X-Terminal).

I tried to reproduce this bug under Epiphany (the "epiphany-browser" Debian package), which is based on Firefox >= 1.5 (according to the "aptitude show" command), and the result is exactly the same: a freeze.

bug 259672 seems related to this one; Firefox seems to abuse the X memory...

This bug probably applies to every OS with X11, not just Solaris... As there isn't an "any OS with X11" option, I would suggest changing the OS field to "All"; at least that doesn't exclude the other X11-enabled systems.


trollord (trollenlord) wrote :

Yup. It's a known bug that has been around since Firefox 0.3 or so, and it can often lead to the X process growing huge (1-2 gigabytes, etc.), forcing you to restart X. It is caused by a technical choice that makes X reserve bitmap caches on Firefox's behalf, and the Firefox developers have refused to acknowledge the problem because fixing it would make Firefox look bad: Firefox would then show up as what it is, the worst browser when it comes to memory requirements. In theory those bitmap caches should always get freed nicely, but especially when you put more stress on Firefox, for instance by using a large number of simultaneous tabs (200+), it starts leaking very hard and very fast.

trollord (trollenlord) wrote :

This is likely caused by https://bugzilla.mozilla.org/show_bug.cgi?id=296818 . To quote: "The thing is, the x11 pixmap is the _only_ place we're storing our image data." Apparently there are many circumstances where Firefox does not request that this memory be freed (e.g. when crashing, but potentially in other circumstances as well).

I can definitely confirm this having existed since Firefox 0.3, on multiple distributions, multiple X versions (XFree86/Xorg, over the years), multiple arches, and also on FreeBSD. I have restarted my X on average once per day because of this bug during these years.

trollord (trollenlord) wrote :

This also affects Epiphany; it's a Gecko problem.

Changed in epiphany:
status: New → Confirmed

Created attachment 279598

I'm working on a patch to fix this. You can see the initial version of the patch here:

The patch makes Firefox release the pixmaps used for JPEG images after a few seconds. It seems to work pretty well on my machine; my next step is to add support for this to the PNG and GIF decoders, and to take better measurements of how much memory we can save.

Federico, could you post your latest patch from http://primates.ximian.com/~federico/news-2007-09.html#06 here?

Created attachment 280070

Updated patch.

The PNG loader works fine with this one. I just discovered some problems with the JPEG loader; JPEGs "disappear" when they get loaded in very small chunks. I think I know what's going on; hope to have a fix today.

Created attachment 280146

With this patch, both the JPEG and PNG decoders seem to work perfectly. I'd call this ready for inclusion, and leave GIF as an exercise for the reader :)

BTW, I'm pretty sure that this bug is present on all platforms, not just X11. Could someone with proper access please change the OS to "all"?

this stuff should probably have gone in to a new bug, but since we're here...

trollord (trollenlord) wrote :

The mentioned patch set corrects this behaviour as a side effect. I had a discussion with the author, and it should now be guaranteed to free the pixmap in all scenarios. I also heard that the patches have a really good chance of making it upstream for the next Firefox release.

Hello, I recently filed bug 395260 about finding a way to disable pixmap caching to X. See below:

There was a post on the above bug referencing this one. I see the link to the diff and have downloaded the source from Mozilla. I would like to test the diff to see whether it fixes the pixmap issue, but I do not know how to go about this. I have been trying to use patch but get many errors. If anyone could post some quick steps, I would love to do some testing and report back.

The pixmap problem makes firefox on thin clients very unstable (and firefox causes the clients to freeze hard after sucking up all available memory).

Jim Kronebusch
<email address hidden>

Can someone with sufficient permissions please mark bug #395260 as a duplicate of this one?

(In reply to comment #12)

> There was a post the above bug to reference this bug. I see the link to the
> diff and have downloaded source from mozilla. I would like to test the diff to
> see if this fixes the pixmap issue or not, but do not know how to go about
> this.

Download the Firefox sources here:

Download the patch from attachment #280146

Unpack the source; cd into its toplevel directory.

Run this command:

  patch -p1 < patchfile

(where "patchfile" is the patch you downloaded).

Now compile Firefox as usual.

*** Bug 395260 has been marked as a duplicate of this bug. ***

Well, I successfully built the granparadiso package with the suggested patch applied. Performance improved, but the pixmap cache was still allowed to spike and cause thin clients to freeze, though less often. Gavin McCullagh then suggested tweaking this section of the patch to decrease the amount of time the pixmaps are cached even further:

+static int
+get_discard_timer_ms (void)
+{
+ /* FIXME: don't hardcode this */
+ return 5000; /* 5 seconds */
+}

I went to the extreme and reduced this to 0.2 seconds (return 200). Now every site that would have crashed the client in the past... did not. Cool. However, performance on those sites became horrible. But extremely slow loading of the offending sites is way better than hard freezes. A possible increase to a 1-second cache may help. The other 95% of average browsing seems to take no performance hit at all and works quite nicely.

However, non-Gecko-based browsers such as Opera and Konqueror do not seem to cache pixmaps in this fashion at all, and are still able to load the offending sites with no performance hit. I have no idea how this is done, but if Firefox could incorporate such methods it would be a much better fix.

Can the portions of the patch that reduce the amount of time pixmaps remain in system memory be patched into current releases? It looks like it will be some time until FF 3 is released, and the ability to build a current release with this portion of the patch could solve a lot of headaches in the thin-client world until FF 3 is released. If this is possible, please let me know.

Federico, thanks for your work on this.

For others following this thread here is the method I was able to use on Ubuntu with success to build the patch into Firefox (from Gavin McCullagh):

1. Add the gutsy source line to /etc/apt/sources.list if it does not already exist
     deb-src http://ie.archive.ubuntu.com/ubuntu/ gutsy main restricted universe multiverse
2. sudo apt-get build-dep firefox-granparadiso
3. apt-get source firefox-granparadiso
4. cd firefox-granparadiso-3.0~alpha7
a. cd debian/patches/
b. wget http://primates.ximian.com/~federico/misc/moz-images-20070907.diff
c. edit moz-images-20070907.diff if desired to increase/decrease discard timer
d. echo "moz-images-20070907.diff" >> series
e. cd ../../
5. debuild -uc -us
6. Remove the source


Not sure what the heck happened there :) If anyone can remove my double post feel free.

I was just thinking (which isn't always a good thing). Would it be possible to modify this portion of the patch:

+static int
+get_discard_timer_ms (void)
+{
+ /* FIXME: don't hardcode this */
+ return 5000; /* 5 seconds */
+}

and add some intelligence to it? Could this involve some sort of formula that looks at what percentage of RAM is available and sets the time accordingly? Say I have a system with 128 MB RAM total and 15% of it is in use; the formula would set the discard time to 20 seconds. On the next request it looks at the percentage used again, sees that 50% is in use, and sets the discard time to 5 seconds. On the next request RAM is almost full at 80%, so the discard time is set to 0.1 seconds. Does that make sense? This would still involve some hard-coded thresholds, but it would add a little intelligence to the discard time. It could help increase performance when RAM is available, but stay safe when it is used up.

But this would also require a check on every request for pixmap storage which may add overhead. Also I don't know if the discard timer is set on every request or if it is only set on startup of Firefox. And I don't know if the discard time is set per individual storage request or if it affects all items currently in storage.

So feel free to tell me I'm nuts. It was just a thought :-)
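For what it's worth, that memory-pressure formula could be sketched roughly like this. This is purely hypothetical: the used_percent parameter and the thresholds just mirror the 15%/50%/80% example in the comment, and actually querying memory usage is left to the platform.

```c
#include <assert.h>

/* Hypothetical sketch of a pressure-aware discard timer. The thresholds
 * mirror the example above (15% -> 20 s, 50% -> 5 s, 80% -> 0.1 s);
 * used_percent would come from some platform memory query, not shown. */
static int
get_discard_timer_ms (int used_percent)
{
  if (used_percent >= 80)
    return 100;     /* RAM nearly full: discard almost immediately */
  if (used_percent >= 50)
    return 5000;    /* moderate pressure: the patch's 5-second default */
  return 20000;     /* plenty of free RAM: keep pixmaps around longer */
}
```

Whether such a timer is consulted on every storage request or only once at startup decides how responsive it actually is, which is exactly the open question raised in the comment.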


So won't this approach mean that any repaint after the image has timed out has to redecode the image? We should be pretty careful in tuning the timeout so that reading some text and then paging down doesn't mean you have to redecode the images on the page (e.g. the background) on every pagedown...

I've tried this patch, and for general browsing it works fantastically; however, it seems to either crash or grind Firefox to a halt if I try to open huge images, an example being NASA's Blue Marble:


others here: http://earthobservatory.nasa.gov/Newsroom/BlueMarble/

It seems the timeout is shorter than the time the image takes to load, so it ends up fighting against itself. The image will eventually load if I download it to disk and open it, but it's still slow.

(In reply to comment #20)
> So won't this approach mean that any repaint after the image has timed out has
> to redecode the image?

I think this is preferable to Firefox filling up all available physical memory: swapping to disk hurts performance of the whole operating system more than having to redecode images does (although I won't say it can't be annoying).

(In reply to comment #19)
> Could this involve some sort of formula that
> looked at what percentage of RAM is available and set the time accordingly?

There are cache memory limits (derived from the total RAM available) that should be used in such calculations, not really the physical RAM size. See Bug (for those interested, there's also a testcase that makes FF's memory usage jump really high almost instantly).

> I think this is preferable to Firefox filling up all available physical memory

Sure. All I'm saying is that it might be a good idea for the timeout to be on the order of minutes rather than seconds. That would still prevent filling up physical memory and evict images in truly inactive tabs, while not causing issues with common user activity.

Ideally, the timeout would be based on some evaluation of the speed of the processor and amount of available RAM, but that sounds hard and complicated.

If we really wanted to overengineer this, we would time how long the first decode takes (just the decode, so time spent while calling libpng/jpeg/etc), and then use that as input in when to evict that image, along with the amount of memory taken up by the decoded image. But that's probably a future addon patch.

Is it possible to make the discard timer a user-settable option in about:config? Maybe an option to enable the discard timer at all, such as browser.enable.image_discard (true), and an option such as browser.image_discard.time (5000, in milliseconds)? This isn't necessarily something everyone wants, so the ability to turn it on/off wouldn't be a bad thing, and since most users are on high-memory desktops it could even default to off.

I still like the idea that, if a discard timer is enabled, the time should be set based on current memory usage.

Also, it is probably worth looking into how Opera and Konqueror handle this, as they do not seem to have the same problems.


Boris, this problem actually allows memory to be filled up within seconds! Install an application such as xrestop and run it, then hit this website and watch how quickly RAM usage for pixmap storage climbs:


This is unacceptable and will crash a lower-memory client in seconds (Linux thin clients are very unstable because of this). It should also be noted that this is not just Firefox's fault: although Firefox is the offending application in this case, the X server should be better equipped to prevent RAM usage from reaching the point where the X server crashes. Firefox is also not the only offender in this area; other apps such as OpenOffice can be a problem as well.

To help deal with this problem farther up the line I have started a thread on the xorg mailing list:


If there are any developers for Firefox who would like to help find a way to get the Xclients and Xserver to better communicate usage and set limits, feel free to join the discussion.


> then hit this website and watch how quickly RAM usage for pixmap storage is
> allowed to climb:

I don't see how timing out images within seconds will help that problem. A solid-color PNG's file size scales basically as the log of its decoded size, so a fairly small actual file can easily fill up whatever amount of memory you happen to have once decoded.

Frankly, I don't think bug 395260 is a duplicate of this bug. What's needed to fix this bug, in my opinion, is to evict "unused" decoded images, for some definition of "unused". A good one might be an image that has not had its actual image data accessed (via the DOM or by being painted or whatever) for some period of time. The "unused" qualifier is almost certainly needed to prevent unacceptable impact on DHTML, pageload, and scrolling performance.
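A minimal sketch of that "unused" definition, assuming each image records when its decoded data was last touched and a periodic sweep evicts anything idle longer than a threshold. All names here are illustrative, not Gecko's actual code.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of last-access-based eviction. An image records
 * the last time its decoded data was used; a periodic sweep frees
 * decoded data older than a threshold. */
typedef struct {
  long last_access;   /* e.g. milliseconds since some epoch */
  int  decoded;       /* nonzero while decoded data is resident */
} Image;

static void
image_touch (Image *img, long now)
{
  img->last_access = now;
  img->decoded = 1;   /* accessing implies decoding on demand */
}

/* Evict decoded data for every image idle longer than timeout_ms;
 * returns how many images were evicted. */
static int
sweep_images (Image *imgs, size_t n, long now, long timeout_ms)
{
  int evicted = 0;
  for (size_t i = 0; i < n; i++) {
    if (imgs[i].decoded && now - imgs[i].last_access > timeout_ms) {
      imgs[i].decoded = 0;   /* real code would free the pixmap here */
      evicted++;
    }
  }
  return evicted;
}
```

A real implementation would also have to re-decode on the next access, which is exactly the performance trade-off being discussed in this thread.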

What's needed to fix bug 395260, again in my opinion, is a hard cap on the amount of pixmap storage we'll allocate. That should be a separate configuration option from the fix for this bug, and someone running Gecko in an environment where server memory is constrained could use that option. In such an environment, as you have said, a severe performance hit, or even not seeing the image at all, is much more acceptable than crashing the X server.

I do think that we should consider defaulting this hard cap to some fraction of the total physical memory in the machine. Not rendering images at all seems preferable to having to swap to paint them. But again, I think that would best be handled in bug 395260.

Boris, if there is a way to do what you suggest above that would be great. Federico's patch to discard images after a specified amount of time does increase stability on the thin clients, but it still does not prevent abuse of pixmap storage that can possibly crash the Xserver. A cap on the maximum amount able to be used would be a better fix. But if Federico's patch could be backported to a current release of Firefox we would at least have a solution for the interim.

I think the bug I started, 395260, is definitely different from the reason this bug was originally filed. But a resolution to this bug could also help bug 395260. I would still love to see a solution that provides a user-settable option in about:config to limit allocation for pixmap storage.

I am not sure who the actual Firefox developers are or whether they know about this problem. If anyone knows, please help make sure they are aware of it.


Jim, Boris happens to be a big one ;)

(these patches should have gone into a new bug, not this one, but here we are.)

Capping the amount of pixmaps allocated is trivial (assuming we do proper accounting for our pixmaps and don't care about other pixmaps that other apps may have created). That should be separate from this bug. If someone cares feel free to reopen that bug and I'll be happy to explain to someone what needs to be done in that code.
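The accounting described here might look something like this. This is a hypothetical sketch: the function names and the fallback behaviour are made up, and real code would have to track every pixmap Gecko itself creates.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of pixmap accounting with a hard cap: track the
 * total bytes of pixmaps we have allocated and refuse (returning 0)
 * any request that would push us past the cap. */
static size_t pixmap_bytes_used = 0;

static int
pixmap_try_alloc (size_t bytes, size_t cap)
{
  if (pixmap_bytes_used + bytes > cap)
    return 0;                 /* caller falls back, e.g. client memory */
  pixmap_bytes_used += bytes;
  return 1;
}

static void
pixmap_release (size_t bytes)
{
  pixmap_bytes_used -= bytes;
}
```

With such accounting, the constrained-server case from bug 395260 becomes a matter of choosing the cap; the caller's fallback path (degraded rendering or client-side storage) is where the real complexity lives.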

This bug is about discarding uncompressed image data. For that purpose it doesn't matter how the data is stored, be it in a system object or on the heap. Please separate out the issues: pixmaps are X11-specific, and this is a very cross-platform bug.

As for how long we should wait before discarding data, I think that is up in the air. I tend to agree with Boris that we probably want to wait about a minute before purging things not in use; this should have the least impact on performance.

There are many additional things that could be done once this patch is finished and landed, but they really aren't directly related to this bug and should probably live elsewhere.

Created attachment 284696
updated to trunk + cleanup

this updates things to the trunk and cleans up several things.

We'll want to do several follow-on patches including adding support for this to the GIF decoder. We'll probably want to tweak the timeout (currently set to 45 seconds), possibly making it dynamic or make us hold on to images on the current page longer. There are a lot of options.

Created attachment 284708
better fix

This removes the temp buffer accrual in the JPEG decoder and makes us create the image container object earlier on.

SNIP from comment #31
We'll probably want to tweak the timeout (currently set to 45 seconds)

Is it possible that any future patches that incorporate a discard timer also allow the time to be set by the user in about:config? Then the time can be a general number that should be good for most circumstances, but allow tweaking by users for a particular need.


Comment on attachment 284708
better fix

Looks fine; the only issue I have is that AddToRestoreData is using realloc() for every chunk of data that comes in, with just the length of the new piece added -- so we'll end up realloc'ing every time we get data from the network, which sucks a little. It would be better to keep increasing this buffer: start it at 4k, keep doubling up to 128k or so, and then increase by 128k at a time. When you're done reading you can always realloc it back down if there's too much slop.
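The suggested growth policy can be sketched as a capacity function. Hypothetical names; only the 4k-start, double-to-128k, then-linear policy from the review comment is shown, not AddToRestoreData itself.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of the suggested buffer growth: start at 4 KiB,
 * double until 128 KiB, then grow linearly in 128 KiB steps. Returns
 * the capacity to realloc to so that `needed` bytes fit. */
#define BUF_START ((size_t) 4 * 1024)
#define BUF_STEP  ((size_t) 128 * 1024)

static size_t
next_capacity (size_t current, size_t needed)
{
  size_t cap = current ? current : BUF_START;
  while (cap < needed)
    cap = (cap < BUF_STEP) ? cap * 2 : cap + BUF_STEP;
  return cap;
}
```

Because capacity grows geometrically and then in large linear steps, the number of realloc() calls stays small even when data arrives in many tiny network chunks; a final realloc down to the exact size trims the slop, as the comment suggests.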

Created attachment 285005
use nsTArray for handling the buffer

this will fix the reallocs by using nsTArray to manage the buffer and then compacting it at the end.

I just filed several followup bugs for this one including adding GIF support, making the timeout a pref and adding xlib logging. If there are other followup issue you would like to see I suggest filing additional bugs and we'll go from there.

checked in -- marking this FIXED.

regression from 20071015_1517_firefox-3.0a9pre.en-US.win32.zip

The check-in for this bug or for bug 399863 causes a crash.
(The URL is wrong; the correct URL is http://blog.livedoor.jp/ayase_shiori/)

Created attachment 285026
fix for png crash

this moves the init of the image container for the png decoder to init and lazily inits it -- this code is basically the same as that in the jpeg decoder.

fixed, thanks.

We should get a test at least for the crash from comment 38 and for any other stuff mentioned earlier in the bug (too lazy to check now to see if anything else got mentioned).

I've backed this out in the ongoing investigation into the current Tp regression

This appears to have re-landed, so I'm not sure why it isn't marked fixed. It also appears to have caused around a 1% Tp regression on the Linux box and some regression on one of the Talos boxes. I recall the suggestion that this patch was worth it, but I don't see any confirmation of that anywhere.

This has caused bug 400403

This is certainly not really proven technology...

As far as I understand, this patch makes it so that the 'undecoded' data, as read from the stream, is also stored in memory. So it will initially use more memory, until the timeouts kick in and start destroying the 'decoded' data (for JPEGs/PNGs, this is the image data itself).

I think this storage of 'undecoded' data needs to be coordinated with the normal caching (disk/memory), as it could be better to just reload from the disk cache and not allocate additional internal memory.

It certainly could be better, maybe, to read from the disk cache -- it could also be significantly slower. Worth trying, though, in a separate bug.

Sebastien Bacher (seb128) wrote :

that's not an epiphany issue

Changed in epiphany-browser:
status: Confirmed → Invalid

Did this help memory use according to task manager or similar?


I see a definitive reduction in both memory usage and the number of GDI objects, 45 seconds after the page is loaded. And when you force the images back in (not refreshing, but redisplaying after they're scrolled away, and were then purged by the timer), both numbers go back up to their top, and will then go back down (45 sec later) to their minimum again.

There is some definite memory reduction, but since most of the images on the web (especially forums) are GIFs, bug 399925 needs to be fixed for an even greater memory use reduction.

Bug 51028 has some numbers after running the stress test there; on OS X my VM size dropped by about 400MB.

Has anyone been able to successfully apply a version of this patch to a current flavor of the FF 2.x codebase? The current patch and all earlier versions appear to be only for the latest trunk or an earlier version of FF3-alpha, which doesn't do much for the rest of the world that still needs to run FF2 in a production environment.

The only things landing on the 2.0 branch are stability, security or regression fixes. This patch falls way outside what would be accepted for the branch.

Well, although invasive, I'd classify this as a stability fix myself.

Regardless, even if not acceptable in mainline, a patch that could be applied to 2.x would be extremely useful. I think the problem was that it relied on some infrastructure that didn't exist in ff2.

In general, the image decoders have changed a lot between Fx2 and Fx3 due to the change to the Cairo rendering backend. I'm willing to bet that the patch in this bug wouldn't even have a prayer of working in the non-Cairo world. The rendering backends at this point in time are essentially apples and oranges.

> Well, although invasive, I'd classify this as a stability fix myself.

"stability" means crashes and hangs. Particularly large memory leaks _may_ qualify. This bug just doesn't fit those criteria.

Note that this improves the testcase from bug 213391 quite a bit (if you wait 45 seconds for the caches to clear). Now that we have this, is it easy to detect when we are blowing through our in-memory cache, and to keep only images that are being painted in RAM?

> "stability" means crashes and hangs. Particularly large memory leaks _may_
> qualify. This bug just doesn't fit those criteria.

Currently, Firefox crashes on me every 4 or 5 days under heavy usage (generally 4+ windows showing a rotating set of 50+ tabs, which I admit is not typical of 'normal' usage). Generally, all the Firefox windows disappear, and then firefox-bin pegs the CPU for hours until I kill -9 the process (which might be a separate bug). I have experimented with clean profiles and with removing the Flash plugin, to no avail.

I've been running a CVS build of Minefield/3.0a9pre for just under two weeks and haven't had a single crash yet, and xrestop is showing memory usages that are 20-30 times smaller (16-20 MB instead of half a gig of pixmap memory).

This is why I'd classify it as a stability fix: it causes crashes. That said, I know it's not an issue for most people under normal use (although I'm not the only person who browses like this)

For what it's worth, I usually have 30+ windows open, with 10-20 tabs in each. And I don't generally run into the sort of memory or crash issues you're describing.

Perhaps it's worth figuring out what exact sites are involved and seeing whether there's something going on other than the image memory usage?

I'll try to reproduce it again (although I've grown somewhat attached to 3.0a9pre). Suggestions as to what logs I should be keeping?

Some idea of what sites in particular cause the memory usage, and better yet the crashes...

If you end up with the browser in a loop, kill it with -SEGV, not -KILL. That should generate a talkback report that might help understand that issue.

What originally put this issue on my radar was a site that included a 2 MB JPEG image (over 41 megapixels, poster size) that had been scaled down to thumbnail size. Decompressing the pixmap for display caused the resources for this image to balloon to 160 MB of RAM. Couple this with the fact that pixmap resources were not being freed, and you can count on one hand how many times this page could be reloaded before it blew out all available memory, including swap.

Unfortunately, I can't provide a link because it came from an inventory page which required an account login.

*** Bug 327280 has been marked as a duplicate of this bug. ***

Changed in firefox:
status: Unknown → Fix Released

This patch causes crash. Please see Bug 441563.
Sorry, I cannot modify dependency list.

This patch changed imgIContainer without changing the iid. See bug 475897.

Bug 399926 is not depending on this one.

Micah Gersten (micahg) wrote :

This was apparently fixed in xulrunner-1.9

affects: epiphany-browser (Ubuntu) → xulrunner-1.9 (Ubuntu)
Changed in xulrunner-1.9 (Ubuntu):
status: Invalid → Fix Released
Micah Gersten (micahg) wrote :

This won't be fixed in Firefox 2 as it's EOL.

Changed in firefox (Ubuntu):
status: Confirmed → Won't Fix
Changed in firefox:
importance: Unknown → Medium