MemShrink progress, week 3

This was the final week of development for Firefox 7, and lots of exciting MemShrink stuff happened.

Per-compartment memory reporters

I landed per-compartment memory reporters, using the new memory multi-reporter interface.  I’ve written previously about per-compartment reporters;  the number of things measured has increased since then, and each compartment now looks like this:

Output of a per-compartment memory reporter

One nice thing about this feature is that it gives technically-oriented users a way to tell which web sites are causing high memory usage.  This may help with perception, too;  people might think “geez, Facebook is using a lot of memory” instead of “geez, Firefox is using a lot of memory”.

Another nice thing is that it can help find leaks.  If you close all your tabs and some compartments are still around, that’s a leak, right?  Actually, it’s more complicated than that:  compartments are destroyed by the garbage collector, so there can be a delay.  But there are buttons at the bottom of about:memory which force GC to occur, so if you press them and some compartments are still around, that’s a leak, right?  Well, it appears that sites that use timers can stick around while the timers are still live.  For example, TBPL appears to have a refresh timer that is 2 or 3 minutes long, so its compartment stays alive for that long after the page is closed.  (Actually, it’s not entirely clear if or why that should happen.)  However, once you’ve accounted for these complications, any remaining compartments are likely to have leaked.
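
To make the timer case concrete, here’s a minimal, hypothetical sketch of the kind of page-side JavaScript that can behave this way;  it’s not TBPL’s actual code, and the interval is invented.

    // Illustrative only: a page that refreshes itself on a long timer.
    // While the timer is pending, its callback (and everything the callback
    // closes over) stays reachable, so the page's compartment can linger in
    // about:memory for up to one interval after the tab is closed.
    var REFRESH_INTERVAL_MS = 3 * 60 * 1000;   // ~3 minutes, like a build-status page

    setInterval(function refresh() {
      // The closure references `document`, keeping the whole page reachable.
      document.title = "refreshed at " + new Date();
    }, REFRESH_INTERVAL_MS);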

Another thing to note is that the JS heap space used by each compartment is now reported, as is the fraction of the JS heap that isn’t currently used by any particular compartment:

Output of non-compartment GC memory reporters

Once this change landed, it didn’t take long to see that the “gc-heap-chunk-unused” number could get very large.  In particular, if you browsed for a while, then closed all tabs except about:memory, then triggered heap minimization (via the buttons at the bottom of about:memory), you’d often see very large “gc-heap-chunk-unused” numbers.  It turns out this is caused by fragmentation.  The JS heap is broken into 1MB chunks, and often you’d end up with a lot of almost-empty chunks, each containing a small number of long-lived objects that keep it alive and prevent it from being deallocated.
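
To get a feel for the numbers, here’s some back-of-the-envelope arithmetic with invented (but plausible) figures, showing how a handful of long-lived objects can pin a lot of almost-empty chunks:

    // Illustrative arithmetic only; the figures are invented for the example.
    var CHUNK_SIZE = 1024 * 1024;          // the JS heap is carved into 1MB chunks
    var chunks = 100;                      // almost-empty chunks surviving a GC
    var liveBytesPerChunk = 4 * 1024;      // a handful of long-lived objects per chunk

    var committed = chunks * CHUNK_SIZE;          // ~100MB of heap chunks
    var live      = chunks * liveBytesPerChunk;   // ~0.4MB of live data
    var unused    = committed - live;             // ~99.6MB of "gc-heap-chunk-unused"

    console.log((unused / committed * 100).toFixed(1) + "% unused, yet no chunk can be freed");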

Reduced JavaScript heap fragmentation

Fortunately, Gregor Wagner had already anticipated this, and he tightened his grasp on the 2011 MemShrink MVP award by greatly reducing this fragmentation in the JavaScript heap.  Parts of Firefox itself are implemented in JavaScript, and many objects belonging to Firefox itself (more specifically, objects in the “system principal” and “atoms” compartments) are long-lived.  So Gregor added simple chunk segregation;  objects belonging to Firefox get put in one group of chunks, and all other objects (i.e. those from websites) get put in a second group of chunks.  This made a huge difference.  See this comment and subsequent comments for some detailed measurements of a short browsing session;  in short, the size of the heap was over 5x smaller (21MB vs. 108MB) after closing a number of tabs and forcing garbage collection.  Even if you don’t force garbage collection, it still helps greatly, because garbage collection happens periodically anyway, and longer browsing sessions will benefit more than shorter sessions.
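
To see why the segregation helps, here’s a toy simulation — not SpiderMonkey’s real allocator, just a model that packs long-lived “system” objects and short-lived content objects into chunks, mixed versus segregated, and counts how many chunks can be freed once the content objects die:

    // Toy simulation, not the real allocator.  Each chunk holds 4 objects and
    // can only be returned to the OS once every object in it is dead.
    function chunksFreeable(objects, segregate) {
      var ordered = segregate
        ? objects.slice().sort(function (a, b) { return a.system - b.system; })
        : objects;
      var chunks = [];
      for (var i = 0; i < ordered.length; i += 4) {
        chunks.push(ordered.slice(i, i + 4));
      }
      // Assume all content objects have died but system objects live on;
      // only chunks containing no system objects can be deallocated.
      return chunks.filter(function (chunk) {
        return chunk.every(function (o) { return !o.system; });
      }).length;
    }

    // Interleave one long-lived system object with three content objects, 25 times.
    var objects = [];
    for (var i = 0; i < 25; i++) {
      objects.push({ system: true }, { system: false }, { system: false }, { system: false });
    }

    console.log("chunks freeable, mixed:      " + chunksFreeable(objects, false));  // 0
    console.log("chunks freeable, segregated: " + chunksFreeable(objects, true));   // 18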

This change will help everyday browsing a lot.  It will also help with the perception of Firefox’s memory usage — once you learn about about:memory, an obvious thing to try is to browse for a while, close all tabs, and see what the memory usage looks like.  Prior to this patch, the memory usage was often much higher than it is on start-up.  With this patch, the usage is much closer to the usage seen at start-up.  Ideally, it would be the same, and Justin Lebar opened a bug to track progress on exactly this issue.

There’s still room for reducing fragmentation in the JavaScript heap further.  Kyle Huey has some good ideas in this bug.

Other bits and pieces

  • Chris Pearce finished a major refactoring of media code.  Previously, every media (<audio> and <video>) element in a page required three threads (four on Linux).  And each thread has a stack.  On Linux, the stack is 8MB, on Windows it’s 1MB, on Mac it’s 64KB.  (Edit: Robert O’Callahan pointed out in the comments that this space is not necessarily committed, which means they’re not really costing anything other than virtual memory space.)  Webpages can have dozens, even hundreds of media elements, so these stacks can really add up (there’s a rough arithmetic sketch after this list).  Chris changed things so that each media element has a single thread.  This really helps on complicated web sites/apps like The Wilderness Downtown, ROME, and WebGL Quake.  (There is also a bug open to reduce the thread stack size on Linux;  8MB is excessive.)  Chris only just landed these patches on mozilla-inbound;  he deliberately waited until after the FF7 deadline to maximize testing time.  Assuming there are no problems, this improvement will be in FF8.
  • Jesse Ruderman tweaked some of his fuzzing tools to look for leaks and found four new ones.
  • Mounir Lamouri landed some memory reporters for the DOM.  They’re currently disabled while he extends them and makes sure they’re reporting sensible numbers.
  • I created mlk-fx8, a bug for tracking leaks and quasi-leaks reported against Firefox 8.
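
Here’s the rough arithmetic on those media thread stacks that I mentioned above, using the stack sizes quoted in that item and an invented page with 100 media elements (remember this is virtual address space, not necessarily committed memory):

    // Illustrative arithmetic only.  Stack sizes are the ones quoted above;
    // the element count is invented.  Linux previously used 4 threads per element.
    var stackBytes = { linux: 8 * 1024 * 1024, windows: 1024 * 1024, mac: 64 * 1024 };
    var mediaElements = 100;
    var threadsBefore = { linux: 4, windows: 3, mac: 3 };

    for (var os in stackBytes) {
      var before = mediaElements * threadsBefore[os] * stackBytes[os];
      var after  = mediaElements * 1 * stackBytes[os];   // one thread per element now
      console.log(os + ": " + before / (1024 * 1024) + "MB of stack space -> " +
                  after / (1024 * 1024) + "MB");
    }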

Quantifying progress

I’m going to attempt to quantify MemShrink’s progress in these weekly reports.  It’s not obvious what the best way to do this is.  The MemShrink wiki page discusses this a bit.  I’m going to do something crude:  just count the number of bugs annotated with “MemShrink”, and track this each week.  Here are the current counts:

  • P1: 18
  • P2: 44
  • P3: 25
  • Unprioritized: 2

Obviously, this is a flawed way to measure progress;  for example, if we improve our leak-detection efforts, which is a good thing, we’ll get more bug reports.  But hopefully these numbers will trend downwards in the long term.

I’m also wondering if my choice to have three priority levels was a mistake, if only for the reason that people like to pick the middle item, so P2 will always dominate.  If we had four priority levels, we’d be forced to prioritize all those middling bugs up or down.

The minutes of today’s meeting are available here.  Thanks to Jesse for taking them.

MemShrink progress, week 2

Here are some potted highlights from the second week of MemShrink.

  • The beauty of Gregor Wagner’s time-based GC trigger, which I mentioned last week, became even more evident.  It’s been confirmed that nine other bug reports are fixed by this one change.  Great stuff!  See this comment and subsequent comments for an analysis.  Unfortunately, the release drivers decided it wasn’t suitable for back-porting to Firefox 6.  Users who leave their browser open overnight should find Firefox 7 more to their liking, once it’s available.
  • I landed lazy initialization of TraceMonitors and JaegerCompartments, something I wrote about previously.  This gave a 2.89% reduction in the Trace Malloc MaxHeap measurement on CentOS (x86_64) release 5, and probably similar reductions on other platforms.
  • Justin Lebar added hard and soft page fault counts to about:memory.  This makes it clear if Firefox is paging badly.
  • Dão Gottwald fixed a leak in the iQ library used by Panorama.  This bug could cause entire tabs to be leaked in some circumstances.
  • Blake Kaplan avoided the creation of some sandboxes.  This seems to have reduced the number of JavaScript compartments required for techcrunch.com.

There’s less than a week until the development period for Firefox 7 ends.  Hopefully we’ll see some more good MemShrink fixes before that happens.

MemShrink progress, week 1

We had our first MemShrink meeting a week ago.  Here’s some progress that’s been made since then.  Please note that this is a list of things that caught my eye, and isn’t meant to be exhaustive in any way.

Two big improvements were made to garbage collection trigger heuristics.

  • Gregor Wagner added a time-based trigger to the garbage collector.  Most notably, this should prevent slow build-ups of garbage when the browser is left idle but an open website is doing a small amount of continual work (eg. updates on a news site).  We’ve had a lot of reports about this problem, so this will hopefully be a big win. Thanks, Gregor!  Note that if the browser is truly idle it won’t trigger, so it shouldn’t cause power problems with Firefox mobile.  See this comment for full details of the new trigger.  (There’s also a toy sketch of the idea after this list.)
  • Luke Wagner (no relation to Gregor!) fixed a gap in the garbage collector’s allocation-based triggers — certain string allocations weren’t being counted by the garbage collector, which meant it allowed them to build up excessively.  This fix particularly helps with complex regular expression operations.
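
Here’s the toy sketch of the time-based trigger mentioned above — a hypothetical model of the idea, not the actual SpiderMonkey heuristic or its real thresholds:

    // Hypothetical model only.  The idea: if any allocation has happened since the
    // last GC and enough time has passed, collect; if the browser is truly idle
    // (no allocation at all), never fire, so it costs nothing in power.
    var MAX_MS_BETWEEN_GCS = 20 * 1000;   // invented threshold, not the real value

    var lastGC = Date.now();
    var bytesAllocatedSinceGC = 0;

    function noteAllocation(bytes) {
      bytesAllocatedSinceGC += bytes;
    }

    function maybeTimeBasedGC() {
      var now = Date.now();
      if (bytesAllocatedSinceGC > 0 && now - lastGC >= MAX_MS_BETWEEN_GCS) {
        runGC();                          // placeholder standing in for a real collection
        lastGC = now;
        bytesAllocatedSinceGC = 0;
      }
    }

    function runGC() { /* placeholder */ }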

Two improvements were made to the coverage of about:memory, thus reducing the amount of memory falling into the “heap-unclassified” bucket.

  • I added three new memory reporters: explicit/js/scripts, explicit/js/object-slots, explicit/js/string-chars.  When I open Gmail, these three together account for 11% of the explicit allocations.
  • Kyle Huey added a new memory reporter called xpti-working-set, which measures memory used by the XPCOM typelib system.  This is usually a bit over 1MB of memory.

Justin Lebar made progress on measuring page fault counts and using them to respond when memory pressure is high.

Finally, I made progress on reducing unnecessary compartment overhead, which I described previously.

MemShrink meetings are go: Tuesday 1pm (Pacific time)

Exactly three months ago I wrote about a new project called MemShrink, the aim of which was to reduce Firefox’s memory usage.  Thanks to Johnny Stenback we will now have weekly MemShrink meetings in which bug reports will be triaged and assigned, and anything else relevant (see this wiki page) will be discussed.  These meetings will take place every Tuesday at 1pm, Pacific time;  the first one will be on June 14.

This is great news!  MemShrink was never going to be more than a niche effort without weekly meetings.  To give you an idea of how important I think they are, I’ll be attending them even though it’ll be 6am Wednesday in my timezone.  So come along or dial-in.  Johnny will post dial-in instructions to the dev-platform list/newsgroup some time before the meeting.

The new leak tracking bugs are live

Yesterday I proposed a new way of tracking leak reports.  It’s now up and running.  Two old tracking bugs have been decommissioned: bug 632234 (which was already resolved) and bug 640452.  Five new bugs have been created:

  • Bug 659855 – (mlk-fx4-beta) [meta] Leaks and quasi-leaks reported against Firefox 4 betas
  • Bug 659856 – (mlk-fx4) [meta] Leaks and quasi-leaks reported against Firefox 4
  • Bug 659857 – (mlk-fx5) [meta] Leaks and quasi-leaks reported against Firefox 5
  • Bug 659858 – (mlk-fx6) [meta] Leaks and quasi-leaks reported against Firefox 6
  • Bug 659860 – (mlk-fx7) [meta] Leaks and quasi-leaks reported against Firefox 7

Please CC yourself if you are interested.  Apologies for any bugspam you received as a result of these changes.  Hopefully this new tracking system will work well.

A new way of tracking leak reports

We get lots of leak reports from users.  There is a spectrum of quality.

  • Some are hopelessly vague and will never lead to anything useful. (“After browsing for several hours, Firefox is using 100s of MBs of memory.  This is unacceptable;  please fix.”)  Bug 643177 is an example.
  • Some are very precise.  This makes them easy to reproduce, likely to be fixed quickly, and easy to re-confirm if other leaks are fixed in the interim.  (“I managed to reduce the problem down to the attached 10 line HTML file, it causes my machine to run out of memory within 10 seconds of loading.”) Bug 654106 is a good example.
  • Most are somewhere between these two extremes.

Because many of the reports aren’t great, it can be hard to tell if the problem is still present some time later. A single leak may be reported N times, then fixed, and N-1 reports stay open.  In short, leak reports get stale.  (This is true of many bug reports, but I think leak reports are more prone to staleness than most.)

How bugs are currently tracked

There is a keyword, ‘mlk’, which is added to almost all leak reports.  There are over 600 open bugs with that keyword, going back over 10 years.  So it’s not much use.

In the lead-up to Firefox 4, I used bug 632234 (which I’ll henceforth call “mlk-fx4-old”) to track potentially blocking leaks.  It worked well.

After that, I created bug 640452 (which I’ll henceforth call “mlk-fx5+”), with which I’ve been tracking leaks in the lead-up to Firefox 5 and later versions.  I carried over unresolved bugs from mlk-fx4-old.  mlk-fx5+ is starting to fill up and feel stale.  Basically, I can see it suffering the same problems as the ‘mlk’ keyword before too long.

So I’m thinking about changing how these are tracked.  The basic idea is to keep using the ‘mlk’ keyword for all leak reports, and then have one leak-tracking bug for each version of Firefox, so it’s clear which version each report applies to.

Steps needed to start this

Add the ‘mlk’ keyword to all mlk-fx4-old and mlk-fx5+ bugs that lack it.

Open new tracking bugs: mlk-fx4-beta, mlk-fx4, mlk-fx5, mlk-fx6.  (The Firefox 4 beta period was long enough, and there were enough leak reports filed against beta versions, that separating mlk-fx4-beta and mlk-fx4 seems worthwhile.)  Make each mlk-fxN depend on mlk-fx(N-1).

For all the existing bugs tracked by mlk-fx4-old and mlk-fx5+, add them to the appropriate new tracking bug.  With one exception: for hopelessly vague ones, just mark them as duplicates of mlk-fxN, with an explanatory message (“we’re not ignoring leaks, look at all these ones we’re tracking!  but your report doesn’t tell us anything we don’t already know, sorry”).

Close mlk-fx5+.

Steps needed to maintain this in the future

When Firefox version N’s cycle starts, open mlk-fxN, and mark it as depending on mlk-fx(N-1).

For each new leak report, mark it as blocking mlk-fxN, for the appropriate N.  Also add the ‘mlk’ keyword.

If someone confirms in a comment that a problem reported in version N is still present in version N+1, mark that bug as also blocking mlk-fx(N+1).

Properties of this system

You can still search for all leak reports, based on the ‘mlk’ keyword.

You can immediately tell roughly how stale a report is likely to be, based on which mlk-fxN tracking bug it blocks.  This is more reliable than the bug number or file date;  for example, we are still getting reports against Firefox 4 even though Firefox 5 (which has fixed a number of leaks) is in beta and Firefox 6 just went to Aurora.  This immediately gives a starting priority for all leak reports:  more recent ones have higher priority because they’re more likely to still be unfixed.

Hopelessly vague reports are resolved immediately by duplicating, so they don’t clog things up.

Tracking bugs shouldn’t get too big and unwieldy, because each Firefox version has a limited lifespan.

Reports against version N still block mlk-fx(N+1), but via one level of indirection.  They still block mlk-fx(N+2), but via two levels of indirection, etc.  So the full chain of dependencies is maintained.

We could periodically go through older bugs (eg. 3 releases ago) and ask people to re-confirm, and close out ones that get no response.  But we wouldn’t have to do that.

Am I crazy?

Is this bureaucratic overkill?  I don’t think so.  It’ll take some work, but I’m happy to do that.  It’ll only take an hour or two to set up, and then it won’t be much harder to maintain than what I’m currently doing with the mlk-fx5+ bug.  (I also have plans for writing instructions to help users file better leak reports.)  And it’ll allow us to proceed much more usefully with the lists of leak reports that we have.

But I’m interested to hear if you disagree, or have any ideas for improving it.  Thanks!

Leak reports triage, May 24, 2011

I’ve been tracking recent memory leak reports in bug 640452. I’m doing this because I think memory leaks hurt Firefox, in terms of losing users to other browsers, as much as any other single cause. (I suspect pathological performance slow-downs due to old and busted profiles hurt almost as much, but that’s a topic for another day.)

There are 61 bugs tracked by bug 640452, 21 of which have been resolved.  Any and all help with the 40 remaining would be most welcome. For each bug I’ve put in square brackets a summary of the action I think it needs.

  • [NEEDS ANALYSIS]:  needs someone to attempt to reproduce it and work out whether the problem is real and still occurring.
  • [NEEDS WORK]: problem is known to be real, needs someone to actually fix it.
  • [PENDING EVANGELISM]: problem with a website is causing a leak, needs someone to check the site has been fixed.
  • [CLOSE?]: bug report is unlikely to go anywhere useful.  Closing it (with a gentle explanation) is probably the best thing to do.
  • [GGC]: needs generational GC to be fixed properly.

Here are the bugs.

  • 497808: This is a leak in gmail, caused by a bug in gmail — when an email editing widget is dismissed, some stuff that should be unlinked from a global object isn’t.  Google Chrome also leaks, but a smaller amount;  it’s unclear why. The bug is assigned to Peterv and is still open pending confirmation that it’s been fixed in gmail. [PENDING EVANGELISM]
  • 573688: Valgrind detects several basic leaks in SQLite.  Assigned to Sayre, no progress yet. [NEEDS WORK]
  • 616850: Huge heaps encountered when browsing www.pixiv.net, leading to incredibly slow cycle collections (3 minutes or more!)  Little progress. [NEEDS ANALYSIS]
  • 617569: Large heaps encountered for some pages using web workers.  Looks like it’s not an actual leak.  Assigned to Gal, he says a generational GC would help enormously, so probably nothing will happen with this bug until that is implemented (which is planned).  I marked it as depending on bug 619558. [GGC]
  • 624186: Using arguments.callee.caller from a JSM can trigger an xpcom leak.  The bug has a nice small test that demonstrates the problem.  Unassigned. [NEEDS WORK]
  • 631536: A bad string leak, seemingly on Windows only, with lots of discussion and a small test case.  Assigned to Honza Bambas.  Was a Firefox 4.0 blocker that was changed to a softblocker at the last minute without any explanation.  Seems close to being fixed. [NEEDS WORK]
  • 632012: Firefox 4 with browser.sessionstore.max_concurrent_tabs=0 uses a lot more memory restoring a session with 100s of tabs than Firefox 3.6 with BarTab.  Unassigned.  Unclear if this is a valid comparison.  [CLOSE?]
  • 634156: Identifies some places where the code could avoid creating sandboxes.  Assigned to Mrbkap, he said (only four days ago) he has a patch in progress.  Seems like it’s not actually a leak, so I changed it to block bug 640457 (mslim-fx5+). [NEEDS WORK]
  • 634449: A classic not-very-useful leak report.  One user reported high and increasing memory usage with vague steps to reproduce.  Two other users piled on with more vague complaints.  The original reporter didn’t respond to requests for more measurements with a later version.  I’m really tempted to close bugs like this, they’ll never lead anywhere.  Unassigned. [CLOSE?]
  • 634895: Vague report of memory usage increasing after awakening a machine after hibernation, with one “me too” report.  Unassigned.  Unlikely to lead to any useful changes.  [CLOSE?]
  • 635121: A leak in Facebook Chat, apparently it’s Facebook’s fault and occurs in other browsers too.  (Unfortunately, leaks like that hurt us disproportionately because we don’t have process separation.)  Assigned to Rob Arnold, marked as a Tech Evangelism bug.  Unclear if the Facebook code has been fixed, or if Facebook has even been contacted. [PENDING EVANGELISM]
  • 635620: Very vague report.  Unlikely to go anywhere.  Unassigned. [CLOSE?]
  • 636077: Report of increasing memory usage, with good test case.  Lots of discussion, but unclear outcomes.  Again, generational GC could help.  MozMill endurance tests showed the memory increase flattening out eventually.  Might be worth re-measuring now.  Assigned to Gal.  I marked it as depending on the generational GC bug (bug 619558). [NEEDS ANALYSIS, GGC]
  • 636220: Memory usage remains high after closing Google Docs tabs.  Assigned to Gal.  Needs more attempts to reproduce. [NEEDS ANALYSIS]
  • 637449: Looks like a clear WebGL leak.  Might be a duplicate of, or related to, bug 651695. Unassigned, but Bjacob looked into it a bit. [NEEDS ANALYSIS]
  • 637782: Memory usage increases on image-heavy sites like http://www.pixdaus.com/ or http://boston.com/bigpicture/ or http://www.theatlantic.com/infocus/.  Lots of discussion but not much progress.  Unclear if the memory is being released eventually.  Needs more analysis.  Unassigned.  [NEEDS ANALYSIS]
  • 638238: Report of memory increasing greatly while Firefox is minimized.  Might be related to RSS Ticker?  I would recommend giving up on this one except the reporter is extremely helpful (he’s participated in multiple bugs and I’ve chatted to him on IRC) and so progress might still be made with some effort.  Unassigned. [NEEDS ANALYSIS]
  • 639186: AdBlock Plus and NoScript together causing a leak on a specific page.  Lots of discussion but it petered out.  Unassigned.  [NEEDS ANALYSIS]
  • 639515: GreaseMonkey causes a big memory spike when entering private browsing.  Some discussion that went nowhere.  Unassigned.  [NEEDS ANALYSIS]
  • 639780: Report of steadily increasing memory usage leading to OOMs.  Steps to reproduce are vague, but the reporter is very helpful and collected lots of data.  Unassigned.
  • 640923: Vague reports of increasing memory usage, lots of people have piled on.  One useful lead:  RSS feeds might be causing problems on Windows 7?  The user named SineSwiper (who has alternated between being abusive and collecting useful data) thinks so.  Unassigned. [NEEDS ANALYSIS]
  • 642472: High memory usage on a mapping site.  Very detailed steps to reproduce;  one other user couldn’t reproduce.  Unassigned.  [NEEDS ANALYSIS]
  • 643177: Vague report.  Unassigned.  [CLOSE?]
  • 643940: Ehsan found leaks in the HTML5 parser with the OS X ‘leaks’ tool.  Unassigned.  [NEEDS ANALYSIS]
  • 644073: Ehsan found a shader leak with the OS X ‘leaks’ tool.  Unassigned. [NEEDS WORK]
  • 644457: High memory usage with gawker websites and add-ons (maybe NoScript?)  See comment 25. Unassigned.  [CLOSE?]
  • 644876: Leak with AdBlock Plus and PageSpeed add-ons on mapcrunch.com.  Unassigned.  [NEEDS ANALYSIS]
  • 645633: High memory usage with somewhat detailed steps to reproduce.  Reporter is helpful and has collected various pieces of data.  [NEEDS ANALYSIS]
  • 646575: Creating sandboxes causes leaks.  Good test case.  Unassigned.  [NEEDS WORK]
  • 650350: Problem with image element being held onto when image data has been released.  Bz said he would look at it.  Unassigned. [NEEDS ANALYSIS]
  • 650649: With only about:blank loaded, memory usage ticks up slightly.  Some discussion;  it may be due to the Urlclassifier downloading things.  If that’s true, it makes diagnosing leaks difficult. [NEEDS ANALYSIS]
  • 651695: Huge WebGL leak in the CubicVR demo.  Unassigned.  [NEEDS WORK]
  • 653817: Memory increase after opening and closing tabs.  A lot of discussion has happened, it’s unclear if the memory usage is due to legitimate things or if it’s an actual leak.  Assigned to me.  [NEEDS ANALYSIS]
  • 653970: High memory usage on an image-heavy site.  Comment 5 has a JS snippet that supposedly causes OOM crashes very quickly.  Unassigned.  [NEEDS ANALYSIS]
  • 654028: High memory usage on Slashdot.  Seems to be because Slashdot runs heaps of JavaScript when you type a comment.  Lots of discussion, seems to be due to bad GC heuristics and/or lack of generational GC?  Unclear if there’s an actual leak, or just delayed GC.  Unassigned.  [NEEDS ANALYSIS]
  • 654820: Leak in JaegerMonkey’s regular expression code generator caught by assertions.  Assigned to cdleary.  [NEEDS WORK]
  • 655227: Timers using small intervals (100ms or less) are never garbage collected(!)  Unassigned.  [NEEDS ANALYSIS]
  • 656120: Bug to do GC periodically when the browser is idle.  Assigned to Gwagner, has a patch.  [NEEDS WORK]
  • 657658: test_prompt.html leaks.  Unassigned.  [NEEDS WORK]

You can see that most bugs are marked as “[NEEDS ANALYSIS]”.  The size of the problem and the amount of developer attention it is receiving are not in proportion.

One thing I want to do is write a wiki page explaining how to submit a useful leak report, in an attempt to avoid the vague reports that never go anywhere.  But the improved about:memory is a big part of that, and it won’t land until Firefox 6.  I’m wondering if the about:memory changes should be backported to Firefox 5 in an attempt to improve our leak reports ASAP.

Another thing I’m wondering about is being more aggressive about closing old leak reports.  We have 624 open bugs that have the “mlk” keyword (including some recent ones that aren’t in the list above).  The oldest of these is bug 39323 which was filed in May 2000.  Surely most of these aren’t relevant any more?  It’s good to have a mechanism for tracking leaks (be it a keyword or a tracking bug) but if most such bugs are never closed, the mechanism ends up being useless.  I’d love to hear ideas about this.

Finally, I’d like to hear if people think this blog post is useful;  I’m happy to make it an ongoing series if so, though regular MemShrink meetings would be more effective.

Another leak fixed

Bug 654106 was just fixed by Henri Sivonen.  The leak was somewhere in the HTML5 parser;  a small amount of memory would leak any time .innerHTML was set on an element.  Thanks to Henri for fixing it, to Hughmann for filing a wonderfully precise and easy-to-reproduce bug report, and to Boris Zbarsky and Mike Hommey for helping with the diagnosis.
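
For illustration, the trigger was roughly the following pattern;  this is a hypothetical sketch, not the actual test case attached to the bug.

    // Illustrative only.  Before the fix, each assignment to .innerHTML leaked a
    // small amount of memory in the HTML5 parser, so a loop like this made memory
    // creep upward even though it builds the same DOM every time.
    var container = document.createElement("div");
    for (var i = 0; i < 100000; i++) {
      container.innerHTML = "<p>hello <b>world</b></p>";
    }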

With that bug fixed, that leaves 34 unresolved bugs hanging off bug 640452, the tracking bug for memory leaks.

MemShrink

Memory consumption is really important in a web browser.  Firefox has some room for improvement on that front, and so Jeff Muizelaar and I are working to start up an effort, called “MemShrink”, to reduce memory consumption in Firefox 5 (and beyond).

We’ve started a wiki page outlining some ideas on ways to improve our tracking of memory consumption.  Please read it and comment.

I’ve also opened bug 640452, which is a tracking bug for memory leaks in Firefox 5, and bug 640457, which is a tracking bug for other memory improvements in Firefox 5.  Please CC yourself if you’re interested.

Update: I just added bug 640791, which is a tracking bug for improvements to memory profiling.