Categories
about:memory Firefox Memory consumption MemShrink

MemShrink progress, week 6

Dave Hunt wrote this week about some endurance tests comparing Firefox 4, 5, 6, 7, and 8, which show how Firefox’s memory usage is improving.  Pay most attention to the ‘resident’ numbers for Firefox 6, 7 and 8, which indicate how much physical memory is being used.  The ‘explicit’ numbers show a big drop between 6 and 7, and then a rise between 7 and 8, but they are suspect, so don’t panic yet.

about:memory improved again this week.  I added a reporter for each compartment’s property table, and two reporters js-compartments-system and js-compartments-user which count how many compartments are present. These latter two will be added to telemetry soon.

I also added a reporter that computes the fraction of the JS heap that is unused, called js-gc-heap-unused-fraction.  The results are surprising — numbers like 30% and 50% are common.  (Some preliminary JS heap visualization helps explain why.)  There are some suggestions for short-term improvements to mitigate this (e.g. smaller GC chunks).  However, I suspect that to fix it properly will require a compacting garbage collector — the current mark-and-sweep algorithm unavoidably leaves lots of holes all over the heap each time it runs.  But I could be wrong!  GCs have a large design space and there may be other solutions.  A non-compacting generational GC would help in its own way too — fewer objects would reach the general heap and so each full heap collection would likely collect less garbage, and thus leave fewer holes.
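
To make that unused fraction concrete, here is a minimal sketch in Python of how a figure like 30–50% arises once mark-and-sweep frees objects scattered across mostly-full chunks.  The chunk count and the per-chunk live sizes are made up for illustration; only the 1MB chunk size comes from the discussion of the JS heap in these posts.

```python
# Toy model: a GC heap made of fixed-size chunks, each partially occupied.
# After mark-and-sweep, surviving objects stay where they are, so a chunk
# remains allocated as long as it holds even one live object.
CHUNK_SIZE = 1024 * 1024  # 1 MB chunks, as in the JS heap described here

# Hypothetical live bytes left in each chunk after a collection.
live_bytes_per_chunk = [900_000, 40_000, 12_000, 700_000, 3_000]

total = CHUNK_SIZE * len(live_bytes_per_chunk)
used = sum(live_bytes_per_chunk)
print(f"unused fraction: {1 - used / total:.0%}")  # ~68% for these made-up numbers
```

A compacting collector could move the few survivors in the second, third and fifth chunks together and release the emptied chunks; mark-and-sweep on its own cannot.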

Here’s this week’s bug count:

  • P1: 30 (+4)
  • P2: 48 (-1)
  • P3: 33 (+0)
  • Unprioritized: 8 (+2)

It’s nice to see the P2 count go down!  Note that bugs remain unprioritized for two reasons.  First, we don’t always get through all the unprioritized bugs in the MemShrink meeting.  Second, sometimes we ask for more investigation or data before assigning a priority.

On the topic of bugs, I closed the meta-bugs for tracking leaks against Firefox releases, and the one for tracking non-leak memory use reductions, as I mooted last week.  I kept open the one for improving memory-related tools because multiple people thought it was useful.

Categories
about:memory compartments Firefox Memory consumption MemShrink

MemShrink progress, week 5

There were four main areas of MemShrink progress this week.

Killing zombie compartments

A zombie compartment is one that outlives its tab.  Kyle Huey fixed a very common class of short-lived zombie compartments caused by XMLHttpRequests.  These zombie compartments only lived for a couple of minutes, but this is an important fix because it makes memory usage follow tab usage better, and also makes detecting longer-lived zombie compartments much easier.

Alon Zakai has also been doing heroic work tracking down a zombie compartment, related to web workers, that is exhibited at coffeekup.org.  I’m sure he’d appreciate additional help from anyone who might be able to provide it.

Improving about:memory

Lots of progress here.

  • I changed the per-compartment memory reporters so that multiple system compartments with the same name are reported separately, instead of being merged.  This made it obvious that JetPack-based add-ons can have dozens of system compartments.  I don’t know if anybody realized this previously, and it’s something of a concern.  The compartments are currently distinguished in about:memory only by their address, as the following example shows.
    [Image: system principal compartment, including its address]
    It would be great to add some identifying information that indicates what part of the code is responsible for creating each system compartment.  That might even give us some much-needed per-add-on accounting.
  • Andrew McCreight added a memory reporter for the cycle collector.
  • I added a memory reporter for the Necko (network) memory cache.
  • Justin Lebar fixed a problem with the image memory reporters, but the fix bounced due to a possible Tp5 RSS/PrivateBytes regression on Mac.  This was surprising;  maybe it was just noise?
  • I changed about:memory so that if there are multiple memory reporters with the same name, you can tell they were merged into a single entry.  For example, in the following screenshot you can easily tell that there were four separate connections to the places database.
    [Image: places entry indicating duplication]

Acting on memory pressure

Justin Lebar made some great progress on triggering memory pressure events when physical or virtual memory is low on Windows.  See comments 28 and 29 on that bug for some nice graphs showing how this kept Firefox responsive when browsing some very image-intensive sites.  This kind of adaptive behaviour is really important for getting Firefox to behave well on the full range of supported devices, from smartphones all the way up to desktop machines with lots of RAM.  Go Justin!

Tweaking MemShrink processes

The MemShrink wiki page used to be vague about the project’s goals.  So this week I made it more precise.

[The] goal is to get the number of MemShrink P1 bugs down to zero. That will mean that all the bad leaks will have been fixed, and also the important auxiliary work (e.g. infrastructure to detect regressions) will be in place.

As a result, Jeff Muizelaar changed areweslimyet.com to point to the list of open MemShrink P1 bugs.  It previously pointed to a Tp5 Talos graph.

On a related note, we have six open meta-bugs for tracking leaks against Firefox releases, one for tracking non-leak memory use reductions, and one for improving memory-related tools.  (These are listed on the wiki.)  I created these bugs before MemShrink meetings started.  But now that we are using the MemShrink whiteboard annotations assiduously, these tracking bugs don’t seem necessary — having two tracking mechanisms is overkill.  In particular, I think their dependencies aren’t being updated consistently.  So I propose to stop using them and close them.  If you have any objections, please let me know and I’ll reconsider.  If I do close them, I’ll make sure that all bugs blocking them have a MemShrink annotation so they won’t fall through the cracks.

And that segues nicely into the MemShrink bug count for this week:

  • P1: 26 (+2)
  • P2: 49 (+0)
  • P3: 33 (+4)
  • Unprioritized: 6 (+4)

Like last week, this increase mostly reflects the fact that people are coming up with new ideas for improvements.

Finally, thanks to Jesse Ruderman for taking minutes at this week’s meeting (and the previous two).

Categories
compartments Firefox Memory consumption MemShrink

Zombie compartments! Recognize and report them. Stop the screaming.

Update (November 30, 2011): I wrote a wiki page about zombie compartments.  It’s much clearer than this post, you should read it instead.

Update (July 31, 2011): This blog post has been linked to from mozilla.org’s front page.  Although any help in improving Firefox’s memory usage is very welcome, please note that this post was aimed at Firefox developers and other technically-inclined users, and I wasn’t expecting its existence to be publicized so widely.  Furthermore, the per-compartment reporters that help identify zombie compartments were only added to Firefox 7 (currently in the Aurora channel), and several existing bugs that cause zombie compartments have been subsequently fixed in the Firefox 8 development code.  This means the hunting of zombie compartments is a sport best left to those who either are using Nightly builds or their own development builds of Firefox.

Firefox’s JavaScript memory is segregated into compartments.  Roughly speaking, all memory used by JavaScript code that is from a particular origin (i.e. website) goes into its own compartment.  Firefox’s own JavaScript code also gets one or more compartments.  Compartments improve security and memory locality.

Per-compartment memory reporters allow you to look at about:memory to see what compartments are present.  Once you close a tab containing a web page, all the compartments associated with that web page should disappear.  (But note that they won’t necessarily disappear immediately;  garbage collection and/or cycle collection has to run first.)

Sometimes this doesn’t happen and you end up with a Zombie Compartment.  This shouldn’t happen, and it indicates a bug.  It also makes children and 1950s B-movie actresses scream.

[Image: 1950s B-movie woman screaming]

If you notice Zombie Compartments while browsing, please report them to The Authorities.  Here are some steps you can follow when reporting one that will increase the chance it’ll be hunted down.

  • First, you should use about:memory?verbose for diagnosis.  You want the “?verbose” suffix (which you can also get to by clicking the “More verbose” link at the bottom of about:memory);  otherwise small compartments might be omitted.
  • Second, beware that many sites use scripts from other origins.  Scripts from Google, Facebook and Twitter are particularly common.  This means that the most reliable diagnosis of a Zombie Compartment occurs if you do the following: start Firefox anew, open about:memory?verbose and one other tab, then close that other tab, then hit “minimize memory usage” at the bottom of about:memory?verbose several times to force multiple garbage and cycle collections.  (Sometimes hitting it once isn’t enough;  I’m not sure why.)  If the compartment remains, it’s very likely a Zombie Compartment.
  • After that, try waiting a while, say 10 or 20 minutes, then try the “minimize memory usage” button again.  Some Zombie Compartments stick around for a limited time before disappearing;  others are immortal, and it’s useful to know which is which.
  • Some Zombie Compartments are caused by add-ons.  So if you have add-ons enabled, please try to reproduce in safe mode, which disables them.  If you can identify, by disabling them one at a time, a single add-on that is responsible, that is extremely helpful.  Zombie compartments that are caused by add-ons are definitely interesting, but their importance depends on the popularity of the add-on.
  • Finally, please file a bug that includes all the information you’ve gathered, add “[MemShrink]” to its whiteboard, and mark it as blocking bug 668871.  Attaching the full contents of about:memory?verbose is very helpful.  See bug 669545 for an example.

Please, stop the screaming.  Report Zombie Compartments to The Authorities.

Categories
Memory consumption MemShrink Performance

Building a page fault benchmark

I wrote a while ago about the importance of avoiding page faults for browser performance.  Despite this, I’ve been focusing specifically on reducing Firefox’s memory usage.  This is not a terrible thing;  page fault rates and memory usage are obviously strongly linked.  But they don’t have perfect correlation.  Not all memory reductions will have equal effect on page faults, and you can easily imagine changes that reduce page fault rates — by changing memory layout and access patterns — without reducing memory consumption.

A couple of days ago, Luke Wagner initiated an interesting email conversation with me about his desire for a page fault benchmark, and I want to write about some of the things we discussed.

It’s not obvious how to design a page fault benchmark, and to understand why I need to first talk about more typical time-based benchmarks like SunSpider.  SunSpider does the same amount of work every time it runs, and you want it to run as fast as possible.  It might take 200ms to run on your beefy desktop machine, 900ms on your netbook, and 2000ms to run on your smartphone.  In all cases, you have a useful baseline against which you can measure optimizations.  Also, any optimization that reduces the time on one device has a good chance of reducing time on the other devices.  The performance curve across devices is fairly flat.

In contrast, if you’re measuring page faults, these things probably won’t be true on a benchmark that does a constant amount of work.  If my desktop machine has 16GB of RAM, I’ll probably get close to zero page faults no matter what happens.  But on a smartphone with 512MB of RAM, the same benchmark may lead to a page fault death spiral;  the number will be enormous, assuming you even bother waiting for it to finish (or the OS doesn’t kill it).  And the netbook will probably lie unhelpfully on one side or the other of the cliff in the performance curve.  Such a benchmark will be of limited use.

However, maybe we can instead concoct a benchmark that repeats a sequence of interesting operations until a certain number of page faults have occurred.  The desktop machine might get 1000 operations, the netbook 400, the smartphone 100.  The performance curve is fairly flat again.
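
As a rough sketch of that control loop, here is what “repeat until a fault budget is exhausted” might look like in Python on Linux, counting the harness’s own major page faults via getrusage.  In a real benchmark you would read the browser process’s fault counts instead (e.g. from /proc/<pid>/stat), and do_operation_sequence is a purely hypothetical stand-in for whatever automation actually drives the browser.

```python
import resource

FAULT_BUDGET = 10_000      # stop once this many major page faults have occurred
MAX_OPERATIONS = 100_000   # safety cap so the sketch terminates on huge-RAM machines

def major_faults() -> int:
    # ru_majflt counts faults that required I/O, i.e. real paging activity.
    return resource.getrusage(resource.RUSAGE_SELF).ru_majflt

def do_operation_sequence(n: int) -> None:
    # Placeholder: one "open a tab, interact, open child pages, close some" sequence.
    pass

def run_benchmark() -> int:
    start = major_faults()
    operations = 0
    while operations < MAX_OPERATIONS and major_faults() - start < FAULT_BUDGET:
        do_operation_sequence(operations)
        operations += 1
    return operations  # the score: how much work fits within the fault budget

if __name__ == "__main__":
    print("operations completed:", run_benchmark())
```

The score (operations completed) is the quantity that stays comparable across the desktop, netbook and smartphone cases described above.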

The operations should be representative of realistic browsing behaviour.  Obviously, the memory consumption has to increase each time you finish a sequence, but you don’t want to just open new pages.  A better sequence might look like “open foo.com in a new tab, follow some links, do some interaction, open three child pages, close two of them”.

And it would be interesting to run this test on a range of physical memory sizes, to emulate different machines such as smartphones, netbooks, desktops.  Fortunately, you can do this on Linux;  I’m not sure about other OSes.
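
For the Linux case, one way to emulate a smaller machine is to cap the physical memory available to the browser with the cgroup memory controller.  The sketch below assumes a cgroup-v1 memory hierarchy mounted at /sys/fs/cgroup/memory (the common setup on Linux at the time) and needs root; the group name is arbitrary, and cgroup v2 uses different file names.

```python
import os
import sys

CGROUP_ROOT = "/sys/fs/cgroup/memory"   # assumes a mounted cgroup-v1 memory controller

def run_with_memory_cap(limit_bytes, command):
    group = os.path.join(CGROUP_ROOT, "pagefault-bench")   # arbitrary group name
    os.makedirs(group, exist_ok=True)

    # Cap the physical memory the group may use; anything beyond this gets
    # reclaimed or swapped, much as on a machine with that much RAM.
    with open(os.path.join(group, "memory.limit_in_bytes"), "w") as f:
        f.write(str(limit_bytes))

    # Move this process into the group, then exec the browser so it inherits the cap.
    with open(os.path.join(group, "tasks"), "w") as f:
        f.write(str(os.getpid()))

    os.execvp(command[0], command)

if __name__ == "__main__":
    # e.g. emulate a 512 MB device:  sudo python cap.py firefox -no-remote
    run_with_memory_cap(512 * 1024 * 1024, sys.argv[1:] or ["firefox"])
```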

I think a benchmark (or several benchmarks) like this would be challenging but not impossible to create.  It would be very valuable, because it measures a metric that directly affects users (page faults) rather than one that indirectly affects them (memory consumption).  It would be great to use as the workload under Julian Seward’s VM simulator, in order to find out which parts of the browser are causing page faults.  It might make areweslimyet.com catch managers’ eyes as much as arewefastyet.com does.  And finally, it would provide an interesting way to compare the memory “usage” (i.e. the stress put on the memory system) of different browsers, in contrast to comparisons of memory consumption which are difficult to interpret meaningfully.

 

Categories
Firefox Memory consumption MemShrink

MemShrink progress, week 4

Firefox 7 is currently in the Aurora channel.  Its memory usage improvements have been getting a lot of attention this week, with many people repeating the 30% improvement claim from the official blog post.  I was worried about the post claiming a specific percentage improvement, because there are various ways to measure memory usage, and it varies so greatly depending on the workload, but I haven’t seen anybody dispute it so far.  In fact, the only non-Mozilla measurement I saw was one where a reviewer opened 117 bookmarks at once (one bookmark per tab) and saw a 40% reduction in private bytes on Windows!  This was a pleasant surprise, as we were expecting improvements mostly when users (a) closed lots of tabs, or (b) left the browser idle for a long time.

The most significant MemShrink-related landings this week were things that were subsequently backed out.  Paul Biggar landed jemalloc support for Mac, then had to revert it due to some crashes and possible memory usage regressions.  This support has ended up being an enormous hassle, and Paul’s been tirelessly battling various Mac OS X quirks and annoyances over a period of several months.  An earlier appraisal that “It should be pretty trivial… I don’t think any of this would be a ton of work. Maybe a week of trying a few different things and running it through our unit tests?” turned out to be spectacularly wrong.  Hopefully Paul will be able to fix the problems and re-land soon.

Brian Hackett merged his JavaScript type inference from the jaegermonkey repository to the tracemonkey repository.  Type inference speeds up computationally-intensive JavaScript code significantly, but unfortunately it increased memory usage a lot, due to (a) the memory required for the analysis itself, and (b) some extra memory used by the executing JavaScript code.  As a result, Brian had to back it out from the tracemonkey repository.  He’s now making good progress on reducing the overhead.  One reason that this happened is that the jaegermonkey repository gets very little use, so the problem wasn’t noticed until it landed on the tracemonkey repository which gets more use.  So if you want to help Brian out, please try browser builds from the jaegermonkey repository.

On the tools front:

  • Per-compartment memory reporters have found several cases of “zombie compartments”, which is when a tab is closed but one or more of its compartments stay around.  Bug 668871 is tracking these leaks;  if you see any like this please report them there.
  • Benoit Jacob landed some memory reporters for WebGL, which will show up in about:memory, and I fixed a problem that was causing JavaScript typed arrays to erroneously fall into the “heap-unclassified” bucket in about:memory.
  • Speaking of which, that “heap-unclassified” number is still higher than we’d like, often in the 35–45% range.  If you see a particular site that causes “heap-unclassified” to go unusually high, please report it here.

Finally, here’s the MemShrink bug count, with the changes since last week:

  • P1: 24 (+6)
  • P2: 49 (+5)
  • P3: 29 (+4)
  • Unprioritized: 2 (+0)

The increases look bad, but I think that’s not because progress isn’t being made.  Rather, it’s a reflection that MemShrink efforts are still ramping up, and people are coming up with lots of new ideas and filing bugs for them.  The upwards trend will probably continue for several more weeks.

Categories
Firefox Memory consumption MemShrink

MemShrink progress, week 3

This was the final week of development for Firefox 7, and lots of exciting MemShrink stuff happened.

Per-compartment memory reporters

I landed per-compartment memory reporters, using the new memory multi-reporter interface.  I’ve written previously about per-compartment reporters;  the number of things measured has increased since then, and each compartment now looks like this:

[Image: Output of a per-compartment memory reporter]

One nice thing about this feature is that it gives technically-oriented users a way to tell which web sites are causing high memory usage.  This may help with perception, too;  people might think “geez, Facebook is using a lot of memory” instead of “geez, Firefox is using a lot of memory”.

Another nice thing is that it can help find leaks.  If you close all your tabs and some compartments are still around, that’s a leak, right?  Actually, it’s more complicated than that:  compartments are destroyed by the garbage collector, so there can be a delay.  But there are buttons at the bottom of about:memory which force GC to occur, so if you press them and some compartments are still around, that’s a leak, right?  Well, it appears that the compartments of sites that use timers can stick around while the timers are still live.  For example, TBPL appears to have a refresh timer that is 2 or 3 minutes long, so its compartment stays alive for that long after the page is closed.  (Actually, it’s not entirely clear if or why that should happen.)  However, once you’ve accounted for these complications, any remaining compartments are likely to have leaked.

Another thing to note is that the JS heap space used by each compartment is now reported.  Also, the fraction of the JS heap that isn’t currently used by any particular compartment is also reported:

[Image: Output of non-compartment GC memory reporters]

Once this change landed, it didn’t take long to see that the “gc-heap-chunk-unused” number could get very large.  In particular, if you browsed for a while, then closed all tabs except about:memory, then triggered heap minimization (via the buttons at the bottom of about:memory), you’d often see very large “gc-heap-chunk-unused” numbers.  It turns out this is caused by fragmentation.  The JS heap is broken into 1MB chunks, and often you’d end up with a lot of almost-empty chunks, but they’d have a small number of long-lived objects in them keeping them alive, and thus preventing them from being deallocated.

Reduced JavaScript heap fragmentation

Fortunately, Gregor Wagner had already anticipated this, and he tightened his grasp on the 2011 MemShrink MVP award by greatly reducing this fragmentation in the JavaScript heap.  Parts of Firefox itself are implemented in JavaScript, and many objects belonging to Firefox itself (more specifically, objects in the “system principal” and “atoms” compartments) are long-lived.  So Gregor added simple chunk segregation;  objects belonging to Firefox get put in one group of chunks, and all other objects (i.e. those from websites) get put in a second group of chunks.  This made a huge difference.  See this comment and subsequent comments for some detailed measurements of a short browsing session;  in short, the size of the heap was over 5x smaller (21MB vs. 108MB) after closing a number of tabs and forcing garbage collection.  Even if you don’t force garbage collection, it still helps greatly, because garbage collection happens periodically anyway, and longer browsing sessions will benefit more than shorter sessions.
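
To see why the segregation makes such a difference, here is a deliberately simplified model in Python.  The slot counts and the one-in-97 survival rate are invented; the only assumption carried over from above is that a chunk can only be released once it contains no live objects at all.

```python
CHUNK_CAPACITY = 64   # toy number of object slots per 1 MB chunk
N_CHUNKS = 100

def pinned_chunks(placements):
    """placements: (chunk_index, is_long_lived) for each allocated object.
    After a collection the short-lived objects are gone; a chunk can only be
    handed back to the OS if no long-lived object landed in it."""
    return len({chunk for chunk, long_lived in placements if long_lived})

# Mixed allocation: roughly 1 in 97 objects is long-lived (e.g. belongs to the
# browser itself) and they end up sprinkled across every chunk.
mixed = [(i % N_CHUNKS, i % 97 == 0) for i in range(N_CHUNKS * CHUNK_CAPACITY)]
print("mixed allocation:     ", pinned_chunks(mixed), "of", N_CHUNKS, "chunks pinned")   # 66

# Segregated allocation: the same long-lived objects are packed into their own
# chunks, so almost every other chunk empties out and can be released.
survivors = [obj for obj in mixed if obj[1]]
segregated = [(i // CHUNK_CAPACITY, True) for i in range(len(survivors))]
print("segregated allocation:", pinned_chunks(segregated), "of", N_CHUNKS, "chunks pinned")  # 2
```

In the mixed layout a few dozen long-lived survivors pin two thirds of the hundred chunks, which is exactly the almost-empty-chunk pattern described above; packing them together leaves all but a couple of chunks free to be handed back.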

This change will help everyday browsing a lot.  It will also help with the perception of Firefox’s memory usage — once you learn about about:memory, an obvious thing to try is to browse for a while, close all tabs, and see what the memory usage looks like.  Prior to this patch, the memory usage was often much higher than it is on start-up.  With this patch, the usage is much closer to the usage seen at start-up.  Ideally, it would be the same, and Justin Lebar opened a bug to track progress on exactly this issue.

There’s still room for reducing fragmentation in the JavaScript heap further.  Kyle Huey has some good ideas in this bug.

Other bits and pieces

  • Chris Pearce finished a major refactoring of media code.  Previously, every media (<audio> and <video>) element in a page required three threads (four on Linux).  And each thread has a stack.  On Linux, the stack is 8MB, on Windows it’s 1MB, on Mac it’s 64KB.  (Edit: Robert O’Callahan pointed out in the comments that this space is not necessarily committed, which means they’re not really costing anything other than virtual memory space.)  Webpages can have dozens, even hundreds of media elements, so these stacks can really add up;  see the rough arithmetic after this list.  Chris changed things so that each media element has a single thread.  This really helps on complicated web sites/apps like The Wilderness Downtown, ROME, and WebGL Quake.  (There is also a bug open to reduce the thread stack size on Linux;  8MB is excessive.)  Chris only just landed these patches on mozilla-inbound;  he deliberately waited until after the FF7 deadline to maximize testing time.  Assuming there are no problems, this improvement will be in FF8.
  • Jesse Ruderman tweaked some of his fuzzing tools to look for leaks and found four new ones.
  • Mounir Lamouri landed some memory reporters for the DOM.  They’re currently disabled while he extends them and makes sure that they’re reporting sensible numbers.
  • I created mlk-fx8, a bug for tracking leaks and quasi-leaks reported against Firefox 8.
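
Here is the rough arithmetic promised above for the media-thread change, using a hypothetical media-heavy page.  The 50-element count is invented; the per-platform stack sizes and the three-threads-to-one change come from the item itself, and on platforms where the stacks aren’t committed this is address space rather than physical memory.

```python
MB = 1024 * 1024
stack_size = {"Linux": 8 * MB, "Windows": 1 * MB, "Mac": 64 * 1024}

media_elements = 50                   # a hypothetical media-heavy page
threads_before, threads_after = 3, 1  # per media element (ignoring the extra Linux thread)

for platform, stack in stack_size.items():
    before = media_elements * threads_before * stack
    after = media_elements * threads_after * stack
    # e.g. Linux: 1200 MB of stacks before the change, 400 MB after.
    print(f"{platform:8s} stacks before: {before / MB:6.1f} MB   after: {after / MB:6.1f} MB")
```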

Quantifying progress

I’m going to attempt to quantify MemShrink’s progress in these weekly reports.  It’s not obvious how best to do this.  The MemShrink wiki page discusses this a bit.  I’m going to do something crude:  just count the number of bugs annotated with “MemShrink”, and track this each week.  Here are the current counts:

  • P1: 18
  • P2: 44
  • P3: 25
  • Unprioritized: 2

Obviously, this is a flawed way to measure progress;  for example, if we improve our leak-detection efforts, which is a good thing, we’ll get more bug reports.  But hopefully these numbers will trend downwards in the long term.

I’m also wondering if my choice to have three priority levels was a mistake, if only for the reason that people like to pick the middle item, so P2 will always dominate.  If we had four priority levels, we’d be forced to prioritize all those middling bugs up or down.

The minutes of today’s meeting are available here.  Thanks to Jesse for taking them.

Categories
Memory consumption MemShrink

MemShrink progress, week 2

Here are some potted highlights from the second week of MemShrink.

  • The beauty of Gregor Wagner’s time-based GC trigger, which I mentioned last week, became even more evident.  It’s been confirmed that nine other bug reports are fixed by this one change.  Great stuff!  See this comment and subsequent comments for an analysis.  Unfortunately, the release drivers decided it wasn’t suitable for back-porting to Firefox 6.  Users who leave their browser open overnight should find Firefox 7 more to their liking, once it’s available.
  • I landed lazy initialization of TraceMonitors and JaegerCompartments, something I wrote about previously.  This gave a 2.89% reduction in the Trace Malloc MaxHeap measurement on CentOS (x86_64) release 5, and probably similar reductions on other platforms.
  • Justin Lebar added hard and soft page fault counts to about:memory.  This makes it clear if Firefox is paging badly.
  • Dão Gottwald fixed a leak in the iQ library used by Panorama.  This bug could cause entire tabs to be leaked in some circumstances.
  • Blake Kaplan avoided the creation of some sandboxes.  This seems to have reduced the number of JavaScript compartments required for techcrunch.com.

There’s less than a week until the development period for Firefox 7 ends.  Hopefully we’ll see some more good MemShrink fixes before that happens.

Categories
about:memory JägerMonkey Memory consumption Tracemonkey

You make what you measure

Inspired by Shaver’s patch, I implemented some per-compartment stats in about:memory.  I then visited TechCrunch because I know it stresses the JS engine.  Wow, there are over 70 compartments!  There are 20 stories on the front page.  Every story has a Facebook “like” button, a Facebook “send” button, and a Google “+1” button, and every button gets its own compartment.

That sounds like a bug, but it’s probably not.  Nonetheless, every one of those buttons had an entry in about:memory that looked like this:

[Image: Old compartment measurements]

(The ‘object-slots’ and ‘scripts’ and ‘string-chars’ measurements are also new, courtesy of bug 571249.)

Ugh, 255,099 bytes for a compartment that has only 971 bytes (i.e. not much) of scripts?  Even worse, this is actually an underestimate because there is another 68KB of tjit-data memory that isn’t being measured for each compartment.  That gives a total of about 323KB per compartment.  And it turns out that no JIT compilation is happening for these compartments, so all that tjit-data and mjit-code space is allocated but unused.
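
The arithmetic behind those figures, and the saving claimed below, is simple enough to check.  The 255,099 and 68KB numbers come from the text, the factor of 70 from the compartment count above, and the rounding is mine.

```python
measured_bytes = 255_099      # the about:memory total for one button's compartment
tjit_data_bytes = 68_000      # ~68KB of tjit-data not yet measured per compartment

per_compartment = measured_bytes + tjit_data_bytes
compartments = 70             # roughly what the TechCrunch front page created

print(f"per compartment: ~{per_compartment / 1000:.0f} KB")                # ~323 KB
print(f"whole page:      ~{per_compartment * compartments / 1e6:.0f} MB")  # ~23 MB, mostly unused JIT space
```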

Fortunately, it’s not hard to avoid this wasted space, by lazily initializing each compartment’s TraceMonitor and JaegerCompartment.  With that done, the entry in about:memory looks like this:

[Image: New compartment measurements]

That’s an easy memory saving of over 20MB for a single webpage.  The per-compartment memory reporters haven’t landed yet, and may not even land in their current form, but they’ve already demonstrated their worth.  You make what you measure.

Categories
Firefox Memory consumption Performance Tracemonkey

You lose more when slow than you gain when fast

SpiderMonkey is Firefox’s JavaScript engine.  In Firefox 3.0 and earlier versions, it was just an interpreter.  In Firefox 3.5, a tracing JIT called TraceMonkey was added.  TraceMonkey was able to massively speed up certain parts of programs, such as tight loops;  parts of programs it couldn’t speed up continued to be interpreted.  TraceMonkey provided large speed improvements, but JavaScript performance overall still didn’t compare that well against that of Safari and Chrome, both of which used method JITs that had worse best-case performance than TraceMonkey, but much better worst-case performance.

If you look at the numbers, this isn’t so surprising.  If you’re 10x faster than the competition on half the cases, and 10x slower on half the cases, you end up being 5.05x slower overall.  Firefox 4.0 added a method JIT, JaegerMonkey, which avoided those really slow cases, and Firefox’s JavaScript performance is now very competitive with other browsers.
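
The 5.05x figure above is just a time-weighted average; a two-line check:

```python
# Half the cases take 0.1x the competition's time, half take 10x.
relative_time = 0.5 * 0.1 + 0.5 * 10
print(relative_time)   # 5.05, i.e. about 5x slower overall despite the big wins
```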

The take-away message:  you lose more when slow than you gain when fast. Your performance is determined primarily by your slowest operations.  This is true for two reasons.  First, in software you can easily get such large differences in performance: 10x, 100x, 1000x and more aren’t uncommon.  Second, in complex programs like a web browser, overall performance (i.e. what a user feels when browsing day-to-day) is determined by a huge range of different operations, some of which will be relatively fast and some of which will be relatively slow.

Once you realize this, you start to look for the really slow cases. You know, the ones where the browser slows to a crawl and the user starts cursing and clicking wildly and holy crap if this happens again I’m switching to another browser.  Those are the performance effects that most users care about, not whether their browser is 2x slower on some benchmark.  When they say “it’s really fast”, they probably actually mean “it’s never really slow”.

That’s why memory leaks are so bad — because they lead to paging, which utterly destroys performance, probably more than anything else.

It also makes me think that the single most important metric when considering browser performance is page fault counts.  Hmm, I think it’s time to look again at Julian Seward’s VM profiler and the Linux perf tools.

 

Categories
Firefox Memory consumption

Leak reports mini-triage, May 30, 2011

I just created bug 660577 which consolidated five bug reports, all of which were complaining about Firefox 4 having high memory usage and/or OOM aborts on image-heavy pages.  This is a clear regression from Firefox 3.6, and appears to have two likely causes:

  • The introduction of infallible new/new[] means Firefox 4 sometimes aborts where Firefox 3.6 would try to recover.  (Kyle Huey already opened bug 660580 to fix this.  Thanks, Kyle!)
  • image.mem.min_discard_timeout_ms was increased from 10,000 (10 seconds) to 120,000 (120 seconds).  This means that Firefox holds on to some image data (I don’t understand the exact details) for longer.

Input from people who know the details of this stuff would be most welcome!  Thanks.