Firefox caches pixmaps to X11, need feature to disable

Reported by Jim Kronebusch on 2007-09-06

Affects            Status        Importance  Assigned to
Mozilla Firefox    Fix Released  Medium
firefox (Ubuntu)                 High        Mozilla Bugs

Bug Description

Binary package hint: firefox

Firefox caches pixmaps to X11. On a full workstation with plenty of RAM and tons of hard drive space, this probably wouldn't be seen as a bug. But when running thin clients or any type of diskless workstation this is a HUGE problem. For an example of what happens and to provide repeatability for testing here is some information.

Install Edubuntu 7.04. Boot a thin client. Install xrestop on the server. Log into the client and open xrestop in a terminal. Then open Firefox while leaving xrestop visible. Browse standard, less graphically intense websites and watch the cache usage as displayed by xrestop... not bad. Now hit this webpage:

http://www.carteretcountyschools.org/bms/teacherwebs/sdavenport/artgallery6.htm

You will see the X cache climb drastically until Firefox completely consumes all available RAM and Swap space at which point the client completely freezes and needs to be hard rebooted.

Now, the example site above is certainly an exaggeration of the problem. But it is a site that will crash just about every thin client system you try it on. As a workaround one can add massive amounts of RAM to the thin client (512MB+) or increase nbdswap to a very high amount (1GB+). However, if a user hits the right site, the terminal will still crash. If a browser such as Opera or Konqueror is used and monitored with xrestop, you can hit the same site above and see very little increase, and a client with 128MB RAM and 32MB swap will run perfectly.

This is a bug that has apparently plagued Mozilla-based browsers for years and has not yet been resolved. Please remove this "feature" or at least provide a way to disable pixmap caching to X from within about:config, such as a boolean "browser.cache.pixmap.enable" that users could set to false. I have tried every other config option and none of them disables the pixmap caching.

Thanks,
Jim Kronebusch

Confirming... The thing is, the x11 pixmap is the _only_ place we're storing our
image data. I seem to recall some discussion about catching BadAlloc and
dragging all the image data back over to the client at that point before
retrying, but I can't recall what bug that was in....

bug 210931 is about trapping errors, but i don't think it was planning on doing this (sounds like a good idea, but first we have to deal w/ certain other problems...)

I have the same bug while using Firefox on an X-Terminal with 64 MB of RAM under Ubuntu Dapper. When I load several pictures, the X-Terminal gets slower and slower until it freezes. The mouse stops responding, as does the keyboard (except SysRq to reboot).

While attempting to reproduce the bug with gnome-system-monitor open, the amount of X memory used by Firefox increases each time a picture is loaded until it reaches 60.5 MB, at which point the X-Terminal freezes.

You can reproduce this bug by loading a weather website at http://www.meteociel.fr/modeles/gfs/vent-moyen/3h.htm and rolling the mouse over the links on the left side of the weather pictures under the word "Échéance" (a JavaScript script loads a picture each time you roll over a link, and eats about 3 MB of X memory on our X-Terminal).

I tried to reproduce this bug under Epiphany (the "epiphany-browser" Debian package), which is based on Firefox >= 1.5 (according to the "aptitude show" command), and the result is exactly the same: freeze.

Bug 259672 seems related to this one; Firefox seems to abuse the X memory...

This bug probably applies to all OSes with X11, not just Solaris... as there isn't an "any OS with X11" option, I would suggest changing it to "All"; at least that doesn't exclude all the other X11-enabled OSes.

Created an attachment (id=279598)
moz-images-20070903.diff

I'm working on a patch to fix this. You can see the initial version of the patch here:
http://primates.ximian.com/~federico/news-2007-09.html#firefox-memory-1

The patch makes Firefox release the pixmaps used for JPEG images after a few seconds. It seems to work pretty well on my machine; my next step is to add support for this to the PNG and GIF decoders, and to take better measurements of how much memory we can save.
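For readers not following along in the Mozilla bug, the idea behind the patch is roughly the following. This is a simplified, self-contained sketch, not the actual patch; all names here are hypothetical:

// Simplified model of the discard-and-redecode idea (illustrative only).
// The compressed source bytes are kept around; the decoded pixel buffer
// (the part that lives in an X pixmap in the real implementation) is
// dropped after a period of inactivity and rebuilt on the next paint.
#include <chrono>
#include <cstdint>
#include <vector>

class DiscardableImage {
public:
    explicit DiscardableImage(std::vector<uint8_t> compressed)
        : mCompressed(std::move(compressed)) {}

    // Called whenever the image is painted.
    const std::vector<uint32_t>& Pixels() {
        if (mDecoded.empty()) {
            Decode();                       // redecode after a discard
        }
        mLastUsed = std::chrono::steady_clock::now();
        return mDecoded;
    }

    // Called periodically by a timer; frees the decoded data if the image
    // has not been painted for `timeout`.
    void MaybeDiscard(std::chrono::milliseconds timeout) {
        if (!mDecoded.empty() &&
            std::chrono::steady_clock::now() - mLastUsed > timeout) {
            mDecoded.clear();
            mDecoded.shrink_to_fit();
        }
    }

private:
    void Decode() {
        // Placeholder for the real JPEG/PNG decoder.
        mDecoded.assign(mCompressed.size(), 0xFFFFFFFFu);
    }

    std::vector<uint8_t> mCompressed;                // always kept (small)
    std::vector<uint32_t> mDecoded;                  // large; discardable
    std::chrono::steady_clock::time_point mLastUsed{};
};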

David Tomaschik (matir) wrote :

This should really be reported to Mozilla: significant features are unlikely to be added by the Ubuntu team.

Dean Mumby (dean-mumby) wrote :

Behaviour confirmed on Ubuntu Feisty with LTSP5 using HP t5000 thin clients with 128 MB RAM.

David Tomaschik (matir) wrote :

On another note, according to the Gecko/Firefox devs at Mozilla, version 3.0 (now in alpha) makes a significant difference in this behavior by automatically cleaning the cache.

I can report this to Gecko/Firefox as well then.

You note changes in version 3.0 to clean the cache; this most likely won't help. I assume the cache will only be cleaned when the user leaves the cached page. The problem isn't cache building up over time and not being properly released; the problem is that it is cached at all. Firefox can consume enough memory on a single page, while it is still being viewed, to crash the client.

I also want to be sure the Ubuntu devels are aware of the problem, as this will make it very hard for Edubuntu to gain a user base. If people install an Edubuntu system and the clients constantly freeze while users browse the web with any Mozilla-based browser, they are very likely to move on and discontinue use of Edubuntu.

I hope that with the Ubuntu devels aware of this problem, Canonical will push the Mozilla devels for a fix. I figure Canonical has more clout with Mozilla than I do.

Jim

I have now appropriately filed a bug in bugzilla to help move this upstream to Mozilla developers:

https://bugzilla.mozilla.org/show_bug.cgi?id=395260

Jim

Federico, could you post your latest patch from http://primates.ximian.com/~federico/news-2007-09.html#06 here?

Created an attachment (id=280070)
moz-images-20070906.diff

Updated patch.

The PNG loader works fine with this one. I just discovered some problems with the JPEG loader; JPEGs "disappear" when they get loaded in very small chunks. I think I know what's going on; hope to have a fix today.

Created an attachment (id=280146)
moz-images-20070907.diff

With this patch, both the JPEG and PNG decoders seem to work perfectly. I'd call this ready for inclusion, and leave GIF as an exercise for the reader :)

BTW, I'm pretty sure that this bug is present on all platforms, not just X11. Could someone with proper access please change the OS to "all"?

this stuff should probably have gone into a new bug, but since we're here...

Hello, I had recently filed bug 395260 to find a way to disable pixmap caching to X. See below:

https://bugzilla.mozilla.org/show_bug.cgi?id=395260

There was a post in the above bug referencing this bug. I see the link to the diff and have downloaded the source from Mozilla. I would like to test the diff to see whether it fixes the pixmap issue, but I do not know how to go about this. I have been trying to use patch but get many errors. If anyone could post some quick steps for doing so, I would love to do some testing and report back.

The pixmap problem makes firefox on thin clients very unstable (and firefox causes the clients to freeze hard after sucking up all available memory).

Thanks,
Jim Kronebusch
<email address hidden>

Can someone with sufficient permissions please mark bug #395260 as a duplicate of this one?

(In reply to comment #12)

> There was a post the above bug to reference this bug. I see the link to the
> diff and have downloaded source from mozilla. I would like to test the diff to
> see if this fixes the pixmap issue or not, but do not know how to go about
> this.

Download the Firefox sources here:
ftp://ftp.mozilla.org/pub/mozilla.org/firefox/releases/granparadiso/alpha7/source

Download the patch from attachment #280146

Unpack the source; cd into its toplevel directory.

Run this command:

  patch -p1 < patchfile

(where "patchfile" is the patch you downloaded).

Now compile Firefox as usual.

*** Bug 395260 has been marked as a duplicate of this bug. ***

Well, I successfully built the granparadiso package with the suggested patch applied. Performance improved, but the pixmap cache was still allowed to spike and cause thin clients to freeze, though less often. Gavin McCullagh then suggested tweaking this section of the patch to decrease the amount of time the pixmaps are cached even further:

+static int
+get_discard_timer_ms (void)
+{
+ /* FIXME: don't hardcode this */
+ return 5000; /* 5 seconds */
+}

I went to the extreme side and reduced this to 0.2 seconds (return 200). Now every site that would have crashed the client in the past... did not. Cool. However, performance on those sites became horrible. But extremely slow loading of the offending sites is way better than hard freezes. Possibly increasing the cache time to 1 second may help. But the other 95% of average browsing seems to take no performance hit at all and works quite nicely.

However, non-Gecko-based browsers such as Opera and Konqueror do not seem to cache pixmaps in this fashion at all, and are still able to load the offending sites with no performance hit. I have no idea how this is done, but if Firefox could incorporate such methods, this would be a much better fix.

Are the portions of the patch that reduce the amount of time pixmaps remain in system memory able to be patched into current releases such as 2.0.0.6? It looks like it will be some time until FF 3 is released and the ability to build a current release with this portion of the patch could solve a lot of headaches in the thin client world until FF 3 is released. If this is possible please let me know.

Federico, thanks for your work on this.

For others following this thread here is the method I was able to use on Ubuntu with success to build the patch into Firefox (from Gavin McCullagh):

1. Add a gutsy deb-src line to /etc/apt/sources.list if one does not already exist
     deb-src http://ie.archive.ubuntu.com/ubuntu/ gutsy main restricted universe multiverse
2. sudo apt-get build-dep firefox-granparadiso
3. apt-get source firefox-granparadiso
4. cd firefox-granparadiso-3.0~alpha7
a. cd debian/patches/
b. wget http://primates.ximian.com/~federico/misc/moz-images-20070907.diff
c. edit moz-images-20070907.diff if desired to increase/decrease discard timer
d. echo "moz-images-20070907.diff" >> series
e. cd ../../
5. debuild -uc -us
6. Remove the source

Jim

Not sure what the heck happened there :) If anyone can remove my double post feel free.

I was just thinking (which isn't always a good thing). Would it be possible to modify this portion of the patch:

+static int
+get_discard_timer_ms (void)
+{
+ /* FIXME: don't hardcode this */
+ return 5000; /* 5 seconds */
+}

and add some intelligence to it? Could this involve some sort of formula that looks at what percentage of RAM is available and sets the time accordingly? Say I have a system with 128 MB RAM total, and 15% of it is in use; the formula would set the discard time to 20 seconds. Then on the next request it would look at the percentage used again and see that 50% is in use, so it sets the discard time to 5 seconds. On the next request RAM is almost full at 80%, so the discard time is set to 0.1 seconds. Does that make any sense? This would still involve some hard-coded thresholds, but it would add a little intelligence to the discard time. This could help increase performance when RAM is available, but stay safe when it is used up.

But this would also require a check on every request for pixmap storage, which may add overhead. Also, I don't know if the discard timer is set on every request or only at Firefox startup. And I don't know whether the discard time is set per individual storage request or whether it affects all items currently in storage.
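To make the suggestion concrete, here is a back-of-the-envelope sketch of such a pressure-based timer. It is purely illustrative; the thresholds are arbitrary, and the in-use fraction would have to come from some hypothetical memory probe (e.g. parsed out of /proc/meminfo):

// Illustrative only: pick a discard timeout from current memory pressure.
// The thresholds below are made-up examples, not values from any patch.
static int
get_discard_timer_ms (double memory_in_use_fraction)
{
  if (memory_in_use_fraction < 0.25) return 20000; /* plenty free: 20 s  */
  if (memory_in_use_fraction < 0.60) return 5000;  /* getting tight: 5 s */
  if (memory_in_use_fraction < 0.80) return 1000;  /* tight: 1 s         */
  return 100;                                      /* nearly full: 0.1 s */
}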

So feel free to tell me I'm nuts. It was just a thought :-)

Jim

So won't this approach mean that any repaint after the image has timed out has to redecode the image? We should be pretty careful in tuning the timeout such that reading some text and then paging down doesn't mean that you have to redecode the images on the page (e.g. the background) on every pagedown...

I've tried this patch, and for general browsing it works fantastically; however, it seems to either crash or grind Firefox to a halt if I try to open huge images, an example being NASA's Blue Marble:

http://earthobservatory.nasa.gov/Newsroom/BlueMarble/Images/BlueMarble_2005_SAm_09_4096.jpg

others here: http://earthobservatory.nasa.gov/Newsroom/BlueMarble/

It seems the timeout is shorter than the time it takes for the image to load, so it ends up fighting against itself. It will eventually load if I download the image to disk and open it, but it's still slow.

(In reply to comment #20)
> So won't this approach mean that any repaint after the image has timed out has
> to redecode the image?

I think this is preferable to Firefox filling up all available physical memory - swapping to disk hurts performance of the whole operating system more than having to redecode images (although I won't say it can't be annoying).

(In reply to comment #19)
> Could this involve some sort of formula that
> looked at what percentage of RAM is available and set the time accordingly?

There are cache memory limits (derived from the total RAM available) that should be used in such calculations, not really the physical RAM size. See bug 213391 (for those interested - there's also a testcase there that makes FF's memory usage jump really high almost instantly).

> I think this is preferable to Firefox filling up all available physical memory

Sure. All I'm saying is that it might be a good idea for the timeout to be on the order of minutes rather than seconds. That would still prevent filling up physical memory and evict images in truly inactive tabs, while not causing issues with common user activity.

Ideally, the timeout would be based on some evaluation of the speed of the processor and amount of available RAM, but that sounds hard and complicated.

If we really wanted to overengineer this, we would time how long the first decode takes (just the decode, so time spent while calling libpng/libjpeg/etc.), and then use that, along with the amount of memory taken up by the decoded image, as input into when to evict that image. But that's probably a future add-on patch.
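A rough sketch of what that heuristic could look like, hypothetical and not part of any patch here: favour evicting images that hold a lot of memory but were cheap to decode, and keep the ones that are expensive to rebuild.

// Hypothetical eviction score: higher score = better eviction candidate.
// Weighs decoded memory footprint against measured first-decode time.
#include <chrono>
#include <cstddef>

struct DecodedImageInfo {
    std::size_t decodedBytes;                  // size of the uncompressed data
    std::chrono::milliseconds firstDecodeTime; // measured on the first decode
};

static double EvictionScore(const DecodedImageInfo& info) {
    double costMs = static_cast<double>(info.firstDecodeTime.count()) + 1.0;
    return static_cast<double>(info.decodedBytes) / costMs;
}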

Is it possible to make the use of the discard timer a user-settable option in about:config? Maybe an option to enable a discard timer at all, such as browser.enable.image_discard true, and an option such as browser.image_discard.time 5000 (in milliseconds)? This wouldn't be something that necessarily everyone wants, so the ability to turn it on/off wouldn't be a bad thing, and since most users are on high-memory desktops this could even default to off.

I still like the idea that, if a discard timer is enabled, the time should be set based on current memory usage.

Also it is probably worth looking into how Opera and Konqueror handle this as they do not seem to have the same problems.

Jim

Boris, this problem actually allows memory to be filled up within seconds! Install an application similar to xrestop and run it, then hit this website and watch how quickly RAM usage for pixmap storage is allowed to climb:

http://www.carteretcountyschools.org/bms/teacherwebs/sdavenport/artgallery6.htm

This is unacceptable and will crash a lower-memory client in seconds (Linux thin clients are very unstable due to this). It should also be noted that this is not just Firefox's fault. Although Firefox is the offending application in this case, the X server should be better equipped to not allow RAM usage to get to the point where the X server crashes. Firefox is also not the only offender in this area; other apps such as OpenOffice can be a problem as well.

To help deal with this problem farther up the line I have started a thread on the xorg mailing list:

http://lists.freedesktop.org/archives/xorg/2007-September/thread.html#28452

If there are any developers for Firefox who would like to help find a way to get the Xclients and Xserver to better communicate usage and set limits, feel free to join the discussion.

Jim

> then hit this website and watch how quickly RAM usage for pixmap storage is
> allowed to climb:

I don't see how timing out images within seconds will help that problem. A solid-color PNG's file size scales basically as the log of the decoded image size, so a fairly small actual image can easily fill up whatever amount of memory you happen to have once decoded (for example, a 10000x10000 solid-color PNG is only a few kilobytes on the wire but roughly 400 MB decoded at 4 bytes per pixel).

Frankly, I don't think bug 395260 is a duplicate of this bug. What's needed to fix this bug, in my opinion, is to evict "unused" decoded images, for some definition of "unused". A good one might be an image that has not had its actual image data accessed (via the DOM or by being painted or whatever) for some period of time. The "unused" qualifier is almost certainly needed to prevent unacceptable impact on DHTML, pageload, and scrolling performance.

What's needed to fix bug 395260, again in my opinion, is a hard cap on the amount of pixmap storage we'll allocate. That should be a separate configuration option from the fix for this bug, and someone running Gecko in an environment where server memory is constrained could use that option. In such an environment, as you have said, a severe performance hit, or even not seeing the image at all, is much more acceptable than crashing the X server.

I do think that we should consider defaulting this hard cap to some fraction of the total physical memory in the machine. Not rendering images at all seems preferable to having to swap to paint them. But again, I think that would best be handled in bug 395260.
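In code terms, the hard cap described above might look something like the following sketch. It assumes the browser keeps its own running total of pixmap bytes; the names and the 64 MB budget are hypothetical:

// Hypothetical hard cap on pixmap memory: refuse to create a server-side
// pixmap once the running total exceeds the configured budget, so the
// caller can degrade gracefully instead of exhausting the X server.
#include <cstddef>

static std::size_t gPixmapBytesInUse = 0;
static std::size_t gPixmapBytesMax   = 64 * 1024 * 1024; // e.g. a 64 MB cap

// Returns true if the caller may allocate `bytes` of pixmap storage.
static bool TryReservePixmapBytes(std::size_t bytes) {
    if (gPixmapBytesInUse + bytes > gPixmapBytesMax) {
        return false;   // over budget: don't render server-side
    }
    gPixmapBytesInUse += bytes;
    return true;
}

static void ReleasePixmapBytes(std::size_t bytes) {
    gPixmapBytesInUse -= bytes;
}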

Boris, if there is a way to do what you suggest above, that would be great. Federico's patch to discard images after a specified amount of time does increase stability on the thin clients, but it still does not prevent abuse of pixmap storage that can crash the X server. A cap on the maximum amount able to be used would be a better fix. But if Federico's patch could be backported to a current release of Firefox, we would at least have a solution for the interim.

I think the bug I filed, 395260, is definitely different from the reason this bug was originally opened. But a resolution to this bug could also help bug 395260. I would, however, still love to see a solution that provides a user-settable option in about:config to limit allocation for pixmap storage.

I am not sure who the actual developers of Firefox are, or whether they know about this problem. If anyone knows, please help make sure they are aware of it.

Thanks,
Jim

Jim, Boris happens to be a big one ;)

(these patches should have gone into a new bug, not this one, but here we are.)

Capping the amount of pixmaps allocated is trivial (assuming we do proper accounting for our pixmaps and don't care about other pixmaps that other apps may have created). That should be separate from this bug. If someone cares feel free to reopen that bug and I'll be happy to explain to someone what needs to be done in that code.

This bug is about discarding uncompressed image data. For doing that it doesn't matter how that data is stored be it in a system object or on the heap. Please separate out the issues. Pixmaps are X11 specific and this is a very cross-platform bug.

As for how long we should wait before discarding data I think that is up in the air. I tend to agree with Boris that we probably want to wait about a minute to purge things not in use. This should have the least impact on performance.

There are many additional things that could be done once this patch is finished and landed, but they really aren't directly related to this bug and should probably live elsewhere.

Marked as in progress because it is being handled upstream (https://bugzilla.mozilla.org/show_bug.cgi?id=395260)

Changed in firefox:
assignee: nobody → mozilla-bugs
importance: Undecided → High
status: New → In Progress
Changed in firefox:
status: Unknown → Confirmed
Jordan Erickson (lns) wrote :

Just wanted to add that we are experiencing this same problem (a 6-school Ubuntu/LTSP5/FF install). A fix would be pretty nice, because this issue is causing our thin clients to hard-lock constantly under regular usage. Not very fun for the students, or for the image of Linux/open-source projects, either.

Will track the Mozilla dev bugzilla, as well. Thanks for everyone's hard work.

Marc Tardif (cr3) wrote :

By caching pixmaps to X11, it is possible that Xorg might crash the system or render it unusable. This occurred in an LTSP context, and the workaround was to add the following lines to the script calling startx. This essentially sets a ulimit on Xorg at a configurable percentage (X_RAMPERC) of the free memory, around 80% in practice:

# Percentage of free memory the X server is allowed to use; 100 disables the limit.
X_RAMPERC=${X_RAMPERC:-100}

if [ ${X_RAMPERC} -lt 100 ]; then
    # Sum free RAM and free swap (in kB) from /proc/meminfo.
    XMEM=0
    while read TYPE VALUE UNITS; do
        case ${TYPE} in
            MemFree:|SwapFree:)
                XMEM=$((${XMEM} + ${VALUE}))
                ;;
        esac
    done < /proc/meminfo
    # Limit X to the configured percentage of that total.
    XMEM=$((${XMEM} * ${X_RAMPERC} / 100))

    ulimit -m ${XMEM}
fi

Please note this is only a workaround and might not be appropriate in every context. However, I thought it might be useful to share this knowledge.

Thanks for your comments, Marc. I worked with Scott Balneaves in #ltsp and he ported the old X_RAMPERC from LTSP4 to LTSP5 for me. I did post the instructions for implementing X_RAMPERC to the edubuntu-users list (didn't think of posting here): https://lists.ubuntu.com/archives/edubuntu-users/2007-September/thread.html#1850 (there need to be some changes to lts.conf as well; in testing, I found setting X_RAMPERC=80 to be about the best). Firefox is also doing a ton of work on this, some of which can be referenced in these two bug reports:

https://bugzilla.mozilla.org/show_bug.cgi?id=395260
https://bugzilla.mozilla.org/show_bug.cgi?id=296818

Good news is that it looks like Firefox 3 will have a bunch more features to work with pixmap cache and be very easy to tailor to thin clients, but that is out a few months yet.

I also brought this up on the xorg mailing list in this thread:

http://lists.freedesktop.org/archives/xorg/2007-September/thread.html#28477

This is also a problem with Xorg, in that it should not let any client consume its resources and crash. The thread has a bunch of good thoughts on future improvements, though I don't think they have decided to make any permanent changes (other apps such as OpenOffice can crash Xorg as well, so more than just FF needs a fix).

Also here is the thread that started things on edubuntu-users:

https://lists.ubuntu.com/archives/edubuntu-users/2007-September/thread.html#1825

Sorry, I forgot about this bug report and did not post back any of the progress. Also, it should be noted that this is not a problem specific to Ubuntu; it affects all Linux distributions running Firefox, and also a handful of other apps when running under Xorg.

Jim

Created an attachment (id=284696)
updated to trunk + cleanup

this updates things to the trunk and cleans up several things.

We'll want to do several follow-on patches, including adding support for this to the GIF decoder. We'll probably want to tweak the timeout (currently set to 45 seconds), possibly making it dynamic or making us hold on to images on the current page longer. There are a lot of options.

Created an attachment (id=284708)
better fix

This removes the temp buffer accrual in the JPEG decoder and makes us create the image container object earlier on.

SNIP from comment #31
We'll probably want to tweak the timeout (currently set to 45
seconds)
END SNIP

Is it possible that any future patches that incorporate a discard timer also allow the time to be set by the user in about:config? Then the time can be a general number that should be good for most circumstances, but allow tweaking by users for a particular need.

Thanks,
Jim

(From update of attachment 284708)
Looks fine; the only issue I have is that AddToRestoreData is using realloc() for every chunk of data that comes in, with just the length of the new piece added -- so we'll end up realloc'ing every time we get data from the network, which sucks a little. It would be better to keep increasing this buffer -- start it at 4k and keep doubling up to 128k or something, and then increase by 128k at a time... when you're done reading you can always realloc it back down if there's too much slop.
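For illustration, the growth policy being suggested is roughly the following (hypothetical helper; the patch that eventually landed used nsTArray instead, as the next attachment notes):

// Illustrative buffer growth: start at 4 KB, double up to 128 KB, then grow
// linearly in 128 KB steps, instead of realloc'ing by exactly the size of
// every incoming network chunk.
#include <cstddef>

static std::size_t NextCapacity(std::size_t current, std::size_t needed) {
    const std::size_t kMinChunk = 4 * 1024;
    const std::size_t kMaxStep  = 128 * 1024;
    std::size_t cap = current ? current : kMinChunk;
    while (cap < needed) {
        cap += (cap < kMaxStep) ? cap : kMaxStep;  // double, then +128 KB
    }
    return cap;
}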

Created an attachment (id=285005)
use nsTArray for handling the buffer

this will fix the reallocs by using nsTArray to manage the buffer and then compacting it at the end.

I just filed several followup bugs for this one including adding GIF support, making the timeout a pref and adding xlib logging. If there are other followup issue you would like to see I suggest filing additional bugs and we'll go from there.

checked in -- marking this FIXED.

Regression from 20071015_1517_firefox-3.0a9pre.en-US.win32.zip.

The check-in for this or bug 399863 caused a crash:
http://crash-stats.mozilla.com/report/index/6c1ddbca-7b80-11dc-901b-001a4bd43ef6
(The URL in the crash report is wrong; the correct URL is http://blog.livedoor.jp/ayase_shiori/)

Created an attachment (id=285026)
fix for png crash

this moves the init of the image container for the png decoder to init and lazily inits it -- this code is basically the same as that in the jpeg decoder.

fixed, thanks.

We should get a test at least for the crash from comment 38 and for any other stuff mentioned earlier in the bug (too lazy to check now to see if anything else got mentioned).

I've backed this out in the ongoing investigation into the current Tp regression

This appears to have re-landed, so I'm not sure why it isn't marked fixed. It also appears to have caused around a 1% Tp regression on the Linux box and some regression on one of the Talos boxes. I recall the suggestion that this patch was worth it, but I don't see any confirmation of this anywhere.

This has caused bug 400403

This is certainly not really proven technology...

As far as I understand, this patch means that the 'undecoded' data, as read from the stream, is also stored in memory. So this will initially use more memory, until the timeouts kick in and start destroying the 'decoded' data (for JPEGs/PNGs, this is the image data itself).

I think that this storage of 'undecoded' data needs to be coordinated with the normal caching (disk/memory), as it could be better to just reload from the disk cache and not allocate additional internal memory.

It certainly could be better, maybe, to read from the disk cache -- it could also be significantly slower. Worth trying, though, in a separate bug.

Did this help memory use according to task manager or similar?

/be

I see a definite reduction in both memory usage and the number of GDI objects, 45 seconds after the page is loaded. And when you force the images back in (not by refreshing, but by redisplaying them after they're scrolled away and were then purged by the timer), both numbers go back up to their peak, and will then go back down (45 seconds later) to their minimum again.

There is some definite memory reduction, but since most of the images on the web (especially forums) are GIFs, bug 399925 needs to be fixed for an even greater memory use reduction.

Bug 51028 has some numbers after running the stress test there; on OS X my VM size dropped by about 400MB.

Has anyone been able to successfully apply a version of this patch to a current flavor of the FF 2.x codebase? The current patch and all earlier versions appear to target only the latest trunk or an earlier FF3 alpha, which doesn't do much for the rest of the world that still needs to run FF2 in a production environment.

The only things landing on the 2.0 branch are stability, security or regression fixes. This patch falls way outside what would be accepted for the branch.

Well, although invasive, I'd classify this as a stability fix myself.

Regardless, even if not acceptable in mainline, a patch that could be applied to 2.x would be extremely useful. I think the problem was that it relied on some infrastructure that didn't exist in ff2.

In general, the image decoders have changed a lot between Fx2 and Fx3 due to the change to the Cairo rendering backend. I'm willing to bet that the patch in this bug wouldn't even have a prayer of working in the non-Cairo world. The rendering backends at this point in time are essentially apples and oranges.

> Well, although invasive, I'd classify this as a stability fix myself.

"stability" means crashes and hangs. Particularly large memory leaks _may_ qualify. This bug just doesn't fit those criteria.

Note that this improves the testcase from bug 213391 quite a bit (if you wait 45s for the caches to clear). Now that we have this is it easy to detect if we are blowing through our in-mem cache and only keep images that are being painted in ram?

> "stability" means crashes and hangs. Particularly large memory leaks _may_
> qualify. This bug just doesn't fit those criteria.

Currently, Firefox crashes on me every 4 or 5 days under heavy usage (generally 4+ windows showing a rotating set of 50+ tabs, which I admit is not typical of 'normal' usage). Generally, all the Firefox windows disappear, and then firefox-bin pegs the CPU for hours until I kill -9 the process (which might be a separate bug). I have experimented with clean profiles and with removing the Flash plugin, to no avail.

I've been running a CVS build of Minefield/3.0a9pre for just under two weeks, and haven't had a single crash yet; xrestop is showing memory usages that are 20-30 times smaller (16-20 MB instead of half a gig of pixmap memory).

This is why I'd classify it as a stability fix: it causes crashes. That said, I know it's not an issue for most people under normal use (although I'm not the only person who browses like this)

For what it's worth, I usually have 30+ windows open, with 10-20 tabs in each. And I don't generally run into the sort of memory or crash issues you're describing.

Perhaps it's worth figuring out what exact sites are involved and seeing whether there's something going on other than the image memory usage?

I'll try to reproduce it again (although I've grown somewhat attached to 3.0a9pre). Suggestions as to what logs I should be keeping?

Some idea of what sites in particular cause the memory usage, and better yet the crashes...

If you end up with the browser in a loop, kill it with -SEGV, not -KILL. That should generate a talkback report that might help understand that issue.

What originally put this issue on my radar was a site that included a 2 MB JPEG image (over 41 megapixels, poster size) that had been scaled down to thumbnail size. Decompressing it to a pixmap for display caused the resources for this image to balloon up to 160 MB of RAM (41 million pixels at 4 bytes per pixel is roughly 164 MB uncompressed). Couple this with the fact that pixmap resources were not being freed, and you can count on one hand how many times this page could be reloaded before it blew out all available memory, including swap.

Unfortunately, I can't provide a link because it came from an inventory page which required an account login.

*** Bug 327280 has been marked as a duplicate of this bug. ***

Changed in firefox:
status: Unknown → Fix Released
Alexander Sack (asac) wrote :

fixed in ffox 3 and won't fix in ffox 2

Changed in firefox:
status: In Progress → Won't Fix

This patch causes a crash. Please see bug 441563.
Sorry, I cannot modify the dependency list.

This patch changed imgIContainer without changing the iid. See bug 475897.

Bug 399926 does not depend on this one.

Changed in firefox:
importance: Unknown → Medium