
The benefits of reducing memory consumption

TL;DR: Any single change that reduces Firefox’s memory consumption can affect Firefox’s speed, stability and reputation in a variety of ways, some of which are non-obvious.  Some examples illustrate this.

The MemShrink wiki page starts with the following text.

“MemShrink is a project that aims to reduce Firefox’s memory consumption. There are three potential benefits.  Speed. […] Stability. […] Reputation.”

I want to dig more deeply into these benefits and the question of what it means to “reduce Firefox’s memory consumption”, because there are some subtleties involved.  In what follows I will use the term “MemShrink optimization” to refer to any change that reduces Firefox’s memory consumption.

Speed

People tend to associate low memory consumption with speed.  However, time/space trade-offs abound in programming, and an explicit goal of MemShrink is to not slow Firefox down — the wiki page says:

Changes that reduce memory consumption but make Firefox slower are not desirable.

There are several ways that MemShrink optimizations can improve performance.

Paging

The case that people probably think of first is paging.  If physical memory fills up and the machine needs to start paging, i.e. evicting virtual memory pages to disk space, it can be catastrophic for performance.  This is because disk accesses are many thousands of times slower than RAM accesses.

However, some MemShrink optimizations are far more likely to affect paging than others.  The key idea here is that of the working set size — what’s important is not the total amount of physical or virtual memory being used, but the fraction of that memory that is touched frequently.  For example, consider two programs that allocate and use a 1GB array.  The first one touches pages within the array at random.  The second one touches every page once and then touches the first page many times.  The second program will obviously page much less than the first if the system’s physical memory fills up.
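As a minimal sketch (hypothetical, purely to illustrate working set size), here are the two access patterns in one program.  Both hold the same 1GB of virtual memory, but the second pattern’s working set collapses to a single page after the initial pass.

```cpp
#include <cstddef>
#include <random>
#include <vector>

int main() {
  const size_t kSize = size_t(1) << 30;  // 1GB array
  const size_t kPage = 4096;             // typical OS page size
  std::vector<char> array(kSize);

  std::mt19937 rng(42);
  std::uniform_int_distribution<size_t> anyPage(0, kSize / kPage - 1);

  // Pattern 1: touch pages at random.  The whole 1GB stays in the working
  // set, so once physical memory fills up, evicted pages keep being needed
  // again and paging is heavy.
  for (int i = 0; i < 1000000; i++) {
    array[anyPage(rng) * kPage] = 1;
  }

  // Pattern 2: touch every page once, then hammer the first page.  After
  // the initial pass the working set is effectively one page, so the rest
  // of the array can be evicted with little cost.
  for (size_t offset = 0; offset < kSize; offset += kPage) {
    array[offset] = 1;
  }
  for (int i = 0; i < 1000000; i++) {
    array[0] = 2;
  }
  return 0;
}
```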

The consequence of this is that a change that reduces the size of data structures that are accessed frequently is much more likely to reduce paging than a change that reduces the size of data structures that are accessed rarely.  Note that this is counter-intuitive!  It’s natural to want to optimize data structures that are wasteful of space, but “wasteful of space” often means “hardly touched” and so such optimizations don’t have much effect on paging.

Measuring the working set size of a complex program like a web browser is actually rather difficult, which means that gauging the impact of a change on paging is also difficult.  Julian Seward’s virtual memory profiler offers one possible way.  Another complication is that results vary greatly between machines.  If you are running Firefox on a machine with 16GB of RAM, it’s likely that no change will affect paging, because Firefox is probably never paging in the first place.  If you are on a netbook with 1GB of RAM, the story is obviously different.  Also, the effects can vary between different operating systems.

Cache pressure

Some MemShrink optimizations can also reduce cache pressure.  For example, a change that makes a struct smaller allows more instances to fit into each cache line, and into the cache as a whole.  Like paging, these effects are very difficult to quantify, and changes that affect hot structures are more likely to reduce cache pressure significantly and improve performance.
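As a rough illustration (a hypothetical struct; exact sizes depend on the compiler and ABI), reordering fields to remove padding shrinks the struct, so more instances fit in each 64-byte cache line and fewer lines are needed to walk an array of them:

```cpp
#include <cstdint>
#include <cstdio>

// Padded layout: each bool forces 7 bytes of padding before the following
// 8-byte field, so this is typically 32 bytes -- only 2 per cache line.
struct NodeBefore {
  bool isDirty;
  uint64_t id;
  bool isVisible;
  uint64_t flags;
};

// Reordered layout: grouping the small fields at the end typically gives
// 24 bytes, so more instances pack into each 64-byte cache line.
struct NodeAfter {
  uint64_t id;
  uint64_t flags;
  bool isDirty;
  bool isVisible;
};

int main() {
  printf("before: %zu bytes, after: %zu bytes\n",
         sizeof(NodeBefore), sizeof(NodeAfter));
  return 0;
}
```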

Structure traversals

Sometimes large data structures must be traversed, and reducing the number of elements in the data structure can reduce that traversal time.  The obvious case for Firefox is the JavaScript heap — the garbage collector and cycle collector frequently traverse it, and so any change that causes dead objects to accumulate more slowly will reduce their traversal times.

Only a small fraction of MemShrink optimizations will speed up structure traversals.

Stability

If Firefox (or any program) uses too much memory, it can lead to aborts and crashes.  These are sometimes called “OOMs” (out of memory). There are two main kinds of OOM:  those involving virtual memory, and those involving physical memory.

Virtual OOMs

A “virtual OOM” occurs when the virtual address space fills up and Firefox simply cannot refer to any more memory.  This is mostly a problem on Windows, where Firefox is distributed as a 32-bit application, and so it can only address 2GB or 4GB of memory (the addressable amount depends on the OS configuration).  This is true even if you have more than 4GB of RAM.  In contrast, Mac OS X and Linux builds of Firefox are 64-bit and so virtual memory exhaustion is essentially impossible because the address space is massively larger.

(I don’t want to get distracted by the question of why Firefox is a 32-bit application on Windows.  I’ll just mention that (a) many Windows users are still running 32-bit versions of Windows that cannot run 64-bit applications, and (b) Mozilla does 64-bit Windows builds for testing purposes.  Detailed discussions of the pros and cons of 64-bit builds can be read here and here.)

The vast majority of MemShrink optimizations will reduce the amount of virtual memory consumed.  (The only counter-examples I can think of involve deliberately evicting pages from physical memory.  E.g. see the example of the GC decommitting change discussed below.)  And any such change will obviously reduce the number of virtual OOMs.  Furthermore, the effect of any reduction is obvious and straightforward — a change that reduces the virtual memory consumption by 100MB on a particular machine and workload is twice as good as one that reduces it by 50MB.  Of course, any improvement will only be noticed by those who experience virtual OOMs, which typically means people who have 100s of tabs open at once.  (It may come as a surprise, but some people have that many tabs open regularly.)

Physical OOMs

A “physical OOM” occurs when physical memory (and any additional backing storage such as swap space on disk) fills up.  This is mostly a problem on low-end devices such as smartphones and netbooks, which typically have small amounts of RAM and may not have any swap space.

The situation for physical memory is similar to that for virtual memory:  almost any MemShrink optimization will reduce Firefox’s physical memory consumption.  (One exception is that it’s possible for a memory allocation to consume virtual memory but not physical memory if it’s never accessed;  more about this in the examples section below.)  And any reduction in physical memory consumption will in turn reduce the number of physical OOMs.  Finally, the effects are again obvious and straightforward — a 100MB reduction is twice as good as a 50MB reduction.

Reputation

Finally, we have reputation.  The obvious effect here is that if MemShrink optimizations cause Firefox to become faster and more stable over time, people’s opinion of Firefox will rise, either because their own experience improves, or because they hear that other people’s experience has improved.

But I want to highlight a less obvious aspect of reputation.  People often gauge Firefox’s memory consumption by looking at a utility such as the Task Manager (on Windows) or ‘top’ (on Mac/Linux).  Interpreting the numbers from these utilities is rather difficult — there are multiple metrics and all sorts of subtleties involved.  (See this Stack Overflow post for evidence of the complexities and how easy it is to get things wrong.)  In fact, in my opinion, the subtleties are so great that people should almost never look at these numbers and instead focus on metrics that are influenced by memory consumption but which they can observe directly as users, i.e. speed and crash rate… but that’s a discussion for another time.

Nonetheless, a non-trivial number of people judge Firefox on this metric.  Imagine a change that caused Firefox’s numbers in these utilities to drop but had no other observable effect.  (Such a change may be impossible in practice, but that doesn’t matter in this thought experiment.)  One thing that has consistently surprised me is that some people view memory consumption as something approaching a moral issue:  low memory consumption is virtuous and high memory consumption is sinful.  As a result, this hypothetical change would, rightly or wrongly, improve Firefox’s reputation.

Let’s call this aspect of Firefox’s reputation the “reputation-by-measurement”.  I suspect the most important metric for reputation-by-measurement is the “private bytes” reported by the Windows Task Manager, because that’s what people seem to most often look at.  Private bytes measures the virtual memory of a process that is not shared with any other process.  It’s my educated guess that in Firefox’s case the amount of shared memory isn’t that high, and so the situation is similar to virtual OOMs above — just about any change that reduces the amount of virtual memory will reduce the private bytes by the same amount, and in terms of reputation-by-measurement, a 100MB reduction is twice as good as a 50MB reduction.
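For what it’s worth, the same figure can be read programmatically on Windows.  This sketch assumes a Windows build environment and the psapi library; the PrivateUsage field is the process’s private commit charge, which is what tools usually label “private bytes”.

```cpp
#include <windows.h>
#include <psapi.h>   // GetProcessMemoryInfo; link against psapi.lib
#include <cstdio>

int main() {
  PROCESS_MEMORY_COUNTERS_EX pmc = {};
  if (GetProcessMemoryInfo(GetCurrentProcess(),
                           reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                           sizeof(pmc))) {
    // PrivateUsage is the process's private commit charge, i.e. the figure
    // that tools usually label "private bytes".
    printf("private bytes: %llu\n",
           static_cast<unsigned long long>(pmc.PrivateUsage));
  }
  return 0;
}
```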

Examples

Some examples help bring this discussion together.  Consider bug 609905, which removed a 512KB block of memory that was found to be allocated but never accessed.  (This occurred because some code that used that block was removed but the allocation wasn’t removed at the same time.)  What were the benefits of this change?

  • The 512KB never would have been in the working set, so performance would not have been affected.
  • Virtual memory consumption would have dropped by 512KB, slightly reducing the likelihood of virtual OOMs.
  • Physical memory consumption probably didn’t change — because the block was never accessed, it probably never made it into physical memory.
  • Private bytes would have dropped by 512KB, slightly improving reputation-by-measurement.

Now consider bug 670596, which made the JavaScript garbage collector decommit (i.e. remove from physical memory and backing storage) 1MB heap chunks that are unused.  What were the benefits of this change?

  • Performance may have improved slightly due to reduced paging, on machines where paging happens.  Those chunks are clearly not in the working set when they are decommitted, but if paging occurs, the pre-emptive removal of some pages from physical memory may prevent the OS from having to evict some other pages, some of which might have been in the working set.
  • Virtual memory consumption would not have changed at all, because decommitted memory still takes up address space.
  • Physical memory consumption would have dropped by the full decommit amount — 10s or even 100s of MBs in many cases when decommitting is triggered — significantly reducing the likelihood of physical OOMs.
  • Private bytes would not have changed, leaving reputation-by-measurement unaffected. [Update: Justin Lebar queried this. This page indicates that decommitting memory does reduce private bytes, which means that this change would have improved reputation-by-measurement.]
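To make “decommit” concrete, here is a hedged sketch of the operation at the OS level (generic Windows API usage, not the GC’s actual code): the 1MB chunk’s address range stays reserved, so virtual memory consumption is unchanged, but its physical backing is released.

```cpp
#include <windows.h>
#include <cstring>

int main() {
  const SIZE_T kChunk = 1 << 20;  // a 1MB chunk, as in the GC example above

  // Reserve and commit: consumes address space, and RAM once touched.
  void* chunk = VirtualAlloc(nullptr, kChunk, MEM_RESERVE | MEM_COMMIT,
                             PAGE_READWRITE);
  if (!chunk) return 1;
  memset(chunk, 0, kChunk);

  // Decommit: physical memory and pagefile backing are released, but the
  // range stays reserved -- virtual memory consumption does not drop.
  VirtualFree(chunk, kChunk, MEM_DECOMMIT);

  // The chunk can later be re-committed in place and reused.
  VirtualAlloc(chunk, kChunk, MEM_COMMIT, PAGE_READWRITE);

  VirtualFree(chunk, 0, MEM_RELEASE);
  return 0;
}
```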

Another interesting one is bug 676457.  It fixed a problem where PLArenaPool was requesting lots of allocations of ~4,128 bytes.  jemalloc rounded these requests up to 8,192 bytes, so almost 50% of each 8,192 byte block was wasted, and there could be many of these blocks.  The patch fixed this by reducing the requests to 4,096 bytes, which is a power of two and is not rounded up (it is also usually the size of an OS virtual memory page).  What were the benefits of this change?

  • Performance may have improved due to less paging, because the working set size may have dropped.  The size of the effect depends on how often the final 32 used bytes of each chunk — those that spilled onto a second page — are accessed.  For at least some of the blocks those 32 bytes would never be touched.
  • Virtual memory consumption dropped significantly, reducing the likelihood of virtual OOMs.
  • Physical memory consumption may have dropped, but it’s not clear by how much.  In cases where the extra 32 bytes are never accessed, the second page might not have ever taken up physical memory.
  • Private bytes would have dropped by the same amount as virtual memory, improving reputation-by-measurement.
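A small arithmetic sketch of the rounding described above.  The rule used here (requests in this size range padded to the next multiple of the 4KB page size) is my paraphrase of the behaviour the bug describes, not jemalloc’s actual code.

```cpp
#include <cstddef>
#include <cstdio>

// Assumed rule for this size range: pad the request up to the next multiple
// of the 4KB page size.  Purely illustrative.
static size_t RoundUpToPage(size_t request) {
  const size_t kPageSize = 4096;
  return (request + kPageSize - 1) / kPageSize * kPageSize;
}

int main() {
  printf("%zu -> %zu\n", size_t(4128), RoundUpToPage(4128));  // 8192: ~50% wasted
  printf("%zu -> %zu\n", size_t(4096), RoundUpToPage(4096));  // 4096: no waste
  return 0;
}
```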

Finally, let’s think about a compacting, generational garbage collector, something that is being worked on by the JS team at the moment.

  • Performance improves for three reasons.  First, paging is reduced because of the generational behaviour:  much of the JS engine activity occurs in the nursery, which is small;  in other words, the memory activity is concentrated within a smaller part of the working set.  Second, paging is further reduced because of the compaction: this reduces fragmentation within pages in the tenured heap, reducing the total working set size.  Third, the tenured heap grows more slowly because of the generational behaviour: many objects are collected earlier (in the nursery) than they would be with a non-generational collector, which means that structure traversals done by the garbage collector (during full-heap collections) and cycle collector are faster.
  • Virtual memory consumption drops in two ways.  First, the compaction minimizes waste due to fragmentation.  Second, the heap grows more slowly.
  • Physical memory consumption drops for the same two reasons.
  • Private bytes also drops for the same two reasons.

A virtuous change indeed.

Conclusion

Reducing Firefox’s memory consumption is a good thing, and it has the following benefits.

  • It can improve speed, due to less paging, fewer cache misses, and faster structure traversals.  These changes are likely to be noticed more by users on lower-end machines.
  • It improves stability by reducing virtual OOM aborts, which mostly helps heavy tab users on Windows.  It also improves stability by reducing physical OOM aborts, which mostly affects heavy-ish tab users on small devices like smartphones and netbooks.
  • It improves reputation among those whose browsing experience is improved by the above changes, and also among users who make judgments according to memory measurements with utilities like the Windows Task Manager.

Furthermore, when discussing MemShrink optimizations, it’s a good idea to describe improvements in these terms.  For example, instead of saying “this change reduces memory consumption”, one could say “this change reduces physical and virtual memory consumption and may reduce paging”.  I will endeavour to do this myself from now on.


MemShrink progress, week 33

about:memory

Up until this week, about:memory was a static page;  once generated, it couldn’t change.  This week, I landed a patch that makes every sub-tree expandable and collapsible.  This can be quite useful, because it gives you fine control over what details to ignore.  The following screenshot shows an example where all the level 1 sub-trees in the “explicit” tree are collapsed except for “startup-cache”.  (++ indicates sub-trees that can be expanded, -- indicates sub-trees that can be collapsed, and the remaining nodes (such as “heap-unclassified”) are leaf nodes).

[Screenshot: a mostly-collapsed “explicit” tree in about:memory]

I also changed about:memory so that obviously bogus values (e.g. negative values, percentages greater than 100%) are highlighted.  These indicate bugs in memory reporters, and so we want to know about them.

Memory Reporters

I added new memory reporters for style sheets.  This was the single biggest remaining chunk of dark matter.  On a 64-bit Linux build, this reduces the “heap-unclassified” count when I have Gmail and TechCrunch open from ~23% to ~15%.  The counts mostly show up under the “explicit/dom+style” sub-tree, in “style-sheets” leaf nodes.

I also simplified the writing of memory reporters by removing the need for the computedSize argument when measuring the size of a heap block.

Finally, if you create a sandbox with Components.utils.Sandbox, you can specify a name and about:memory will use that name to identify the sandbox’s compartment.  Sander van Veen made it so that sandboxes that aren’t given a name will use their caller’s filename as a name.  In other words, all sandboxes should now be identifiable in about:memory.

Leaks

Brian Hackett fixed a leak in JaegerMonkey.

Andrew McCreight fixed a leak caused by watchpoints, which are used when debugging JavaScript code.

A leak related to JavaScript sharps (a non-standard, SpiderMonkey-only feature) was fixed by Jeff Walden when he removed support for sharps 🙂

Add-ons

I mentioned last week that I think leaky add-ons are the #1 problem for Firefox’s memory consumption, and raised the idea of telling users when they have an add-on installed that is known to have performance problems.  Bug 720856 is open for this.  Asa Dotzler, Firefox Product Manager, said:

I will secure Firefox client developer resources for this feature where I have some input into resourcing. If this plan is deemed appropriate, I will work with Justin to secure AMO side resources as well and we can nail this problem.

We can’t keep going back and forth on this while our users suffer. We must act now. I understand that defining “bad add-ons” will be contentious but so long as the technical approach is righteous we can sort out how heavy handed we want to be on policy at a later time and move forward implementing this today.

There was already a feature page covering this basic idea, and Asa updated it. This feature would be a huge help to users.  Fingers crossed we’ll see some progress soon!

Henrik Skupin has started working on an add-on called MemChaser (download it here) which is aimed at helping detect problems relating to memory consumption.  Currently it just shows some stats in the add-on bar — resident memory consumption (updated every two seconds), how long the last garbage collection took, and how long the last cycle collection took — but Henrik has many ideas for the future.  Worth watching!

Bug Counts

Here are this week’s bug counts.

  • P1: 22 (-0/+2)
  • P2: 130 (-3/+2)
  • P3: 74 (-2/+2)
  • Unprioritized: 3 (-4/+3)

MemShrink progress, week 32

There wasn’t much MemShrink activity this week in terms of bugs fixed, just bug 718100 and bug 720359.  So I’m going to take the opportunity this week to talk about the bigger picture.

Bug Counts

As a prelude, here are this week’s bug counts.

  • P1: 20 (-4/+0)
  • P2: 131 (-3/+3)
  • P3: 74 (-2/+7)
  • Unprioritized: 4 (-3/+4)

The drop in P1s was just due to bug re-classification;  in particular, three bugs relating to long cycle collector pauses were un-MemShrink’d because they are more about responsiveness, and they are being tracked by Project Snappy.

The Big Ticket Items

David Mandelin asked me today what were the big ticket items for MemShrink.  I’d been looking a lot at the MemShrink:P1 list recently (which is why some were re-classified) and so I was able to break it down into six main areas that cover most of the P1s and various P2s.  I’ll list these from what I think is least important to most important.

#6: Better Script Handling

Internally, a JSScript represents (more or less) the code of a JS function, including things like the internal bytecode that SpiderMonkey generates for it.  The memory used by JSScripts is measured by the “gc-heap/scripts” and “script-data” entries in about:memory.

Luke Wagner did some measurements that showed that most (70–80%) JSScripts created in the browser are never run.  In hindsight, this isn’t so surprising — many websites load libraries like jQuery but only use a fraction of the functions in those libraries.  If SpiderMonkey could be changed to generate bytecode for scripts lazily, it could reduce “script-data” memory usage by 60–70%.  This would also allow the decompiler to be removed, which would be great.

Luke also proposed sharing immutable parts of scripts between web pages.  This would avoid a lot of duplication in the case where you have many tabs open with pages from a single site.

Both of these changes potentially will make the browser faster as well, because SpiderMonkey will spend less time compiling JavaScript source code to bytecode.

No-one is assigned to work on these bugs.  The lazy script creation can be done entirely within the JS engine;  the script sharing requires assistance from Necko.  Luke is currently busy with some other righteous refactorings, but I’m quietly hoping once they’re done he might find time for one or both of these bugs.

#5: Better Memory Reporting

Before you can reduce memory consumption you have to measure it.  about:memory is the critical tool that has facilitated much of MemShrink’s work.  (For example, we never would have known about zombie compartments without it.)  It’s in pretty good shape now but there are two major improvements that can be made.

First, the “heap-unclassified” number (a.k.a “dark matter”) is still typically around 20–25%.  My goal is to reduce that to 10%.  This won’t require any great new insights;  we already have the tools and data required.  Rather, it’s just a matter of grinding through the list of memory reporters that need to be added and improved.

Second, the resources used by each browser tab are reported in an unwieldy fashion:  JS memory on a per-compartment basis;  layout memory on a per-docshell basis;  DOM memory on a per-window basis.  Only a few internal architectural changes stand in the way of uniting these to provide the oft-requested feature of per-tab memory reporting.  This will be great for users, because if Firefox is using more memory than they’d like, it tells them which tabs they should close in order to free up memory.

I am actively working on both these improvements, and I’m hoping that within a couple of months they’ll be mostly done.

#4: Better Memory Consumption Tracking

One thing we haven’t done well in MemShrink is to improve the state of tracking Firefox’s memory consumption.  We have plenty of anecdotes but not much hard data about the improvements we’ve made, and we don’t have good ways to detect any regressions.  A couple of ideas haven’t gone very far, but some good news is that John Schoenick is making great progress on a proper areweslimyet.com implementation.  John has demonstrated preliminary versions of the site at two MemShrink meetings and it’s looking very promising.  It uses the endurance test framework to make the measurements, and opens lots of pages from the Talos tp5 pageset.

We also hope to use telemetry data to analyze how the memory consumption of each released version of Firefox stacks up.  That analysis would come with a significant delay — weeks or months after each release — but it would be much more comprehensive than any oft-run benchmark, coming from the real-world usage patterns of thousands of users.

#3: Compacting Generational GC

If you look in about:memory, JavaScript memory usage usually dominates.  In particular, the “js-gc-heap” is usually large.  There’s also the “js-gc-heap-unused-fraction” number, often 30% or higher, which tells you how much of that space is unused because of fragmentation.  That percentage overstates things somewhat, because often a good proportion of that unused space (see “js-gc-heap-decommitted”) is decommitted, which means that it’s costing nothing but address space… but that is cold comfort if you’re suffering out-of-memory aborts on Windows due to virtual memory exhaustion.

A compacting garbage collector is one that can move objects around the heap, filling up all those little gaps that constitute fragmentation.  The JS team (especially Bill McCloskey and Terrence Cole) is implementing a compacting generational garbage collector, which is a particular kind that tends to have good performance.  In particular, many objects die young and generational collectors find these quickly, which means that the heap will grow at a significantly slower rate than it currently does.  I could be wrong, but I’m convinced this will be a big win for both memory consumption and speed.

#2: Better Foreground Tab Image Handling

Images are stored in a compressed format (e.g. JPEG, PNG, GIF) on disk.  In order to display them, a browser must decompress (a.k.a decode) the compressed form into a raw pixel form that can easily be ten times larger.  This decoded form can be discarded and regenerated as necessary, and there are trade-offs to be made — for example, if you are too aggressive in discarding decoded images, you might have to decode them again, which will take CPU cycles and the user might see flickering if the decoding occurs in the visible part of the page.

However, Firefox goes way too far in the other direction.  If you open a page in the foreground tab, every single image in that page will be immediately decoded, and none of the decoded data will be discarded unless you switch away to another tab.  For pages that contain many images, this is a recipe for horrific memory consumption, and Firefox does much worse than all the other browsers.  So this is a problem that doesn’t rear its head for all users, but it’s terrible for those that are affected.
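To put rough numbers on this (hypothetical but typical figures): a decoded image takes width × height × 4 bytes for RGBA pixels, regardless of how well it compressed on disk.

```cpp
#include <cstdio>

int main() {
  // A 1920x1080 photo might be ~400KB as a JPEG on disk.
  const double compressedKB = 400.0;

  // Decoded for display it is raw pixels: width * height * 4 bytes (RGBA).
  const double decodedKB = 1920.0 * 1080.0 * 4.0 / 1024.0;  // ~8100KB

  printf("compressed: ~%.0fKB, decoded: ~%.0fKB (%.0fx larger)\n",
         compressedKB, decodedKB, decodedKB / compressedKB);

  // A single page with 50 such images therefore needs ~400MB of decoded
  // image data if nothing is discarded.
  printf("50 images decoded: ~%.0fMB\n", 50 * decodedKB / 1024.0);
  return 0;
}
```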

There are three MemShrink:P1 bugs relating to this:  one about not decoding all images immediately, one about discarding non-visible decoded images after some time, and one about some infrastructure work that is required for the first two.  As far as I know, no progress has been made on these three bugs, and although two of them are assigned they are not being actively worked on.

(See this discussion on the dev-platform mailing list for more details about this topic.)

#1: Better Detection and Notification of Leaky Add-ons

It’s been the case for several months that when a user complains about Firefox consuming an excessive amount of memory, it’s usually because of one or more add-ons, and the “can you try that again in safe mode?” / “oh yeah, that fixes it” dance is getting tiresome.

Many add-ons leak.  Even popular, well-written ones:  in the past few months leaks have been found in Adblock Plus, Video DownloadHelper, GreaseMonkey and Firebug.  That’s four of the top five add-ons on AMO!  We’re now getting several reports about leaky add-ons a week;  in this week’s MemShrink meeting there were four:  TorButton, NoSquint, Customize Your Web, and 1Password.  I strongly suspect the leaks we know about are just the tip of the iceberg.

Although leaks in add-ons are not Mozilla’s fault, they are Mozilla’s problem:  Firefox gets blamed for the sins of its add-ons.  And it’s not just memory consumption;  the story is the same for performance in general.  Here’s the quote of the week, from a user of 1Password:

I only use a handful of extensions and honestly never suspected 1P, however after disabling it I noticed my FireFox performance increased very noticibly. I’ve been running for 48 hours now without the 1P extension in Firefox and wow what a difference. Browsing is faster, switching is faster, memory usage is way down.

I’ve lost count of the number of stories like this that I’ve heard.  How many users have we lost to Chrome because of these issues, I wonder?

(And it’s not just leaks.  See this analysis of 16 add-ons and their effect on memory consumption when Firefox starts.)

One small step towards improving this situation was made this week:  Jorge Villalobos and Andrew Williamson added a “check for memory leaks” item to the AMO review checklist (under “Memory leaks from content”).  And Kris Maglione added some support for this checking in his Extension Test add-on.  This means that add-ons with obvious memory leaks (and many of them are obvious if you are actively looking for them) will not be accepted by AMO.

So that will prevent leaks in some new add-ons and new versions of established add-ons.  What about existing add-ons?  One idea is that AMO could also have a flag that indicates add-ons that have known memory problems (and other performance problems).  (This flag wouldn’t be an automatic thing;  it would only be set once a leak has been confirmed, and after giving the author notification and some time to fix the problem.)  So that would also improve things a bit.

But lots of add-ons aren’t hosted on AMO.  Another idea is to have a stronger mechanism, one that informs the user if they have any add-ons installed that are known to cause high memory consumption (or other bad performance problems).  There is an existing mechanism for blocking add-ons that are known to be malware or exceptionally crashy, so hopefully the warnings could piggy-back on top of that.

Then, we need a better way to detect leaky add-ons.  Currently this is entirely done manually — and a couple of excellent contributors have found leaks on multiple add-ons — but I’m hoping that it’ll be possible to do a much more thorough job by analyzing telemetry data to find out which add-ons are correlated with high memory consumption.  That information could be used to trigger manual checking.

Finally, once you know an add-on leaks, it’s not always easy to work out why.  Tools could help a lot here, if they can be made to work well.

Conclusion

I listed six big areas for improvement. If we fixed all of these I think we’d be in a fantastic position.

Three of them (#5 better memory reporting, #4 better memory consumption tracking, #3 compacting generational GC) have people working on them and are in a good state.

Three of them (#6 better script handling, #2 better foreground image tab handling, #1 better detection and notification of leaky add-ons) don’t have people working on them, as far as I know.  If you are willing and have the skills to contribute to any of these areas, please contact me!

And if you think I’ve overestimated or underestimated the importance of any issue, I’d love to hear about it.  Thanks!


MemShrink progress, week 31

Lots of good stuff happened this week in MemShrink-land.

Necko Buffer Cache

I removed the Necko buffer cache.  This cache used nsRecyclingAllocator to delay the freeing of up to 24 short-lived 32KB chunks (24 x 16KB on Mobile/B2G) in the hope that they could be reused soon.  The cache would fill up very quickly and the chunks wouldn’t be freed unless zero re-allocations occurred for 15 minutes.  The idea was to avoid malloc/free churn, but the re-allocations weren’t that frequent, and heap allocators like jemalloc tend to handle cases like this pretty well, so performance wasn’t affected noticeably.  Removing the cache has the following additional benefits.

  • nsRecyclingAllocator added a clownshoe-ish extra word to each chunk, which meant that the 32KB (or 16KB) allocations were being rounded up by jemalloc to 36KB (or 20KB), resulting in 96KB (24 x 4KB) of wasted memory.
  • This cache wasn’t covered by memory reporters, so about:memory’s “heap-unclassified” number has dropped by 864KB (24 x 36KB) on desktop and 480KB (24 x 20KB) on mobile.

I also removed the one other use of nsRecyclingAllocator in libjar, for which this recycling behaviour also had no noticeable effect.  This meant I could then remove nsRecyclingAllocator itself.  Taras Glek summarized things nicely:  “I’m happy to see placebos go away”.

Other Stuff

I gave a talk at the linux.conf.au Browser MiniConf entitled “Notes on Reducing Firefox’s Memory Consumption”, which got some attention on Slashdot.

Henrik Skupin reported that Ghostery 2.6.2 and 2.7beta2 have memory leaks (zombie compartments).  I contacted the authors today and they said they are looking into the problem, so hopefully we’ll see a fix soon.

Gian-Carlo Pascutto reworked the code that downloads the safe browsing database to use less memory, and to have a fallback if memory runs out.  This memory usage is transient and so the main benefit is that it prevents some out-of-memory crashes that were happening frequently.

Last week I mentioned bug 703427, which held up the possibility of a simple, big reduction in SQLite memory usage.  Marco Bonardo did some analysis, and unfortunately the patch caused large increases in the number of disk accesses, and so it won’t work.  A shame.

Kyle Huey fixed a zombie compartment that occurred when right-clicking on a single-line textbox.  The fun thing about this was that in only 3 hours and 35 minutes, the following events happened: the bug report was filed, the problem was confirmed by two people, the bug report was re-categorized into the appropriate component, a patch was posted, the patch was reviewed, the patch landed on mozilla-central, and the bug report was marked as fixed!  And approval for back-porting to Aurora was granted 8 hours later.  Not bad.  Kyle has also made progress on a more frequent zombie compartment caused by searching for text.

Jonathan Kew made the shaped-word caches (which are involved in text rendering) discard their data on memory pressure events.

Quote of the week

A commenter on my blog named jas said (the full comment is here):

a year ago, FF’s memory usage was about 10x what chrome was using in respect to the sites we were running…

so we have switched to chrome…

i just tested FF 9.0.1 against chrome, and it actually is running better than chrome in the memory department, which is good. but, it’s not good enough to make us switch back (running maybe 20% better in terms of memory). a tenfold difference would warrant a switch. in our instance, it was too little, too late.

but glad you are making improvements.

So that’s good, I guess?

I also like this comment from the aforementioned Slashdot thread!

Bug Counts

Here are the current bug counts.

  • P1: 24 (-3/+1)
  • P2: 131 (-8/+7)
  • P3: 69 (-1/+3)
  • Unprioritized: 3 (-3/+2)

That’s a net drop of two, largely because I went through and closed some P1 and P2 bugs that were stale or had been fixed elsewhere.


Notes on Reducing Firefox’s Memory Consumption

I gave a talk yesterday at the linux.conf.au Browser MiniConf, held in Ballarat, Australia.  Its title was “Notes On Reducing Firefox’s Memory Consumption”.

Below are the slides and notes in a SlideShare embedding. If you find that embedding problematic (some people do) you may prefer to download the PDF version directly.


MemShrink progress, week 30

Add-ons

This was the week of add-ons in MemShrink-land.

Jared Wein fixed a problem in Firefox that was causing zombie compartments if you viewed a native video with any add-on installed that implements the nsIContentPolicy interface.  Examples of such add-ons are Adblock Plus, GreaseMonkey, and NoScript, which are respectively the #1, #3 and #9 most popular add-ons on AMO!  Welcome to the MemShrink club, Jared.

Speaking of GreaseMonkey, Arantius fixed a bug in it that was causing zombie compartments with some GM scripts when opening background tabs.

I can’t tell who was responsible for the next fix, because the Add-on SDK folks use Github in a way I don’t understand.   But Myk Melez and/or Gabor Krizsanits greatly reduced the number of compartments used in JetPack-style add-ons.  In one example, the number of compartments dropped from 156 to 8, saving about 20MB of memory.  This was a MemShrink:P1 bug.

Also, as far as I can tell, the same patch also fixed bug 680821 which means that all compartments that hold sandboxes created by JetPack-style add-ons will be marked as belonging to that add-on in about:memory.

Jordan Miner reported that the Delicious Bookmarks add-on is leaking excessive numbers of connections to the Places database.  I tried to reproduce the problem and failed, but I don’t have a Delicious account.  If anyone else who does have a Delicious account is able to reproduce, please let me know or comment in the bug.

Finally, Jorge Villalobos and Andrew Williamson from the add-ons team came to this week’s MemShrink meeting.  We had a very fruitful discussion about how to help add-on authors detect and avoid memory leaks, and how to help add-on users understand which add-ons have high memory consumption.  Stay tuned for more details in the coming weeks!

Three Cheers for New Contributors

Krzysztof Kotlenga, a new contributor, removed an OpenGL cache that was wasting 1GB+ of memory on Linux when hardware acceleration was enabled (it’s currently not enabled by default, but will be at some point in the future).   Great work, Krzysztof!  [Update: I originally wrote WebGL instead of OpenGL, which was incorrect.]

Bug Counts

Here are the current bug counts.

  • P1: 26 (-2/+0)
  • P2: 132 (-14/+3)
  • P3: 67 (-2/+5)
  • Unprioritized: 4 (-0/+4)

That’s a net reduction of six bugs.  The main factor here was that I went through some of our P2s and closed ones that were stale and/or covered by other bugs, and downgraded to P3 a few more that are now known to be less important than we first thought.

Before finishing, I’d like to highlight bug 703427.  Richard Hipp, one of the SQLite developers, has a tiny patch that drastically reduces the amount of memory used by SQLite in Firefox — I ran with it for a while and my SQLite memory consumption dropped from ~15MB to less than 3MB.  The patch also causes a moderate speed drop, but it’s unclear if that speed drop is noticeable to Firefox.  We need someone who understands Firefox’s SQLite usage well to evaluate this patch so that Richard knows if it’s something that should go into a released version of SQLite as an option.  The bug is assigned to Marco Bonardo but he’s currently very busy.  Is there anyone else who knows enough about SQLite to take on this bug?


MemShrink progress, weeks 28–29

It’s been a quiet couple of weeks, due to Christmas, New Year, and all that.

Two leaks in add-ons were fixed.

  • cyberscorpio fixed a zombie compartment in the Super Start add-on.  The problem was that the add-on was failing to remove an event observer when a page is unloaded.  The fix will be present in v3.6.2.
  • Jesse Hakanen fixed a zombie compartment in the AdBlock Plus Pop-up Addon add-on (note that this add-on extends the blocking functionality of AdBlock Plus;  it’s not AdBlock Plus itself).  The problem was that the add-on was failing to catch the page unload event, and so some references to closed pages were left in memory.  The fix is now available in v0.3.

Because both leaks had similar causes, I updated the documentation on avoiding zombie compartments in add-ons accordingly.

Patches for two other MemShrink bugs landed.

I saw two noteworthy mentions of Firefox’s recent reductions in memory consumption. The first was in webmonkey.com’s review of Firefox 9.

Alongside the faster JavaScript processing Firefox 9 continues to show improvements from Mozilla’s MemShrink project, an ongoing effort to reduce memory usage in the browser. Indeed, for the first time in a very long time my testing showed Firefox 9 using less memory than Opera (which has long been the least RAM-hungry browser I test). Opening the same dozen tabs in both Firefox and Opera used only 367MB of RAM in Firefox compared to 378MB in Opera 11.60 [Update: Note that the memory test was performed with the following Firefox add-ons running: AdBlock, Ghostery, BetterPrivacy and HTTPS-Everywhere.] There’s no longer much difference between the two, which is a testament to Firefox’s dramatic improvement over the last six months of MemShrink efforts.

Cross-browser memory comparisons are fraught with difficulties, both in the measuring and the interpreting, so I don’t give the comparison with Opera much weight.  But the fact that this author has seen Firefox’s own memory consumption drop over the last few versions is meaningful.

The second was a comment on the previous MemShrink progress report from AC about Firefox 10.

You may remember that I posted a few weeks ago saying that Firefox 9 beta was great and that I could definitely see more memory and performance improvements over Fire 7 and 8. Well I am pleased to say that after a few days of using Firefox 10 beta 1, I can clearly see more memory improvements again. There is an obvious difference between Firefox 9 and Firefox 10 on the beta channel.

I have no hard evidence as such other than saying that nothing has changed on my laptop in the last few weeks but I am using the same heavy tab load and the memory usage displayed by Windows task manager is lower than it was by about 20 MB and the responsiveness and speed feels sharp.

Whatever you guys are doing with the Memshrink project, keep doing it because the results are paying dividends.

I’ve read literally 1000s of complaints about Firefox’s memory consumption online, so it’s very encouraging to hear compliments.

Here are the current bug counts.

  • P1: 28 (-0/+2)
  • P2: 143 (-1/+5)
  • P3: 64 (-1/+1)
  • Unprioritized: 0 (-0/+0)

No big changes, unsurprisingly.  [Update: the P2 numbers were incorrect and have been fixed.]


MemShrink progress, week 27

Adaptive Memory Behaviour

The biggest news this week was that Justin Lebar landed code to make Firefox’s memory consumption adaptive on Windows.  The code works by wrapping VirtualAlloc() and some similar OS-level allocation functions.  Each time memory is allocated, the amount of available virtual and physical memory is measured by the wrapper.  If either of those amounts are below a threshold, a memory pressure event is triggered, which causes numerous components within Firefox to take steps to reduce their memory consumption.

It might not sound like it, but this is a big deal, and it was a MemShrink:P1 bug.  Firefox runs on a huge variety of machines and devices, and trying to find one-size-fits-all heuristics for discarding regenerable data is a recipe for unhappiness somewhere.  Adaptive behaviour is much better.
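Here is a rough sketch of the mechanism (not Firefox’s actual wrapper; the function names and the notification stub are placeholders, and only the Windows APIs are real): after each OS-level allocation, check the available virtual and physical memory and fire a memory-pressure notification if either is below a threshold.

```cpp
#include <windows.h>
#include <cstdio>

// Placeholder: in Firefox this would dispatch a memory-pressure event that
// observers throughout the browser react to.
static void NotifyMemoryPressure(const char* why) {
  printf("memory pressure: %s\n", why);
}

// Illustrative wrapper in the spirit of the change described above.  The
// 128MB virtual-memory threshold mirrors the default mentioned below; the
// physical-memory check is omitted because that threshold defaults to 0.
static void* CheckedVirtualAlloc(SIZE_T size, DWORD type, DWORD protect) {
  void* result = VirtualAlloc(nullptr, size, type, protect);

  MEMORYSTATUSEX stat = {};
  stat.dwLength = sizeof(stat);
  if (GlobalMemoryStatusEx(&stat)) {
    const DWORDLONG kVirtualThreshold = 128ull * 1024 * 1024;
    if (stat.ullAvailVirtual < kVirtualThreshold) {
      NotifyMemoryPressure("low virtual memory");
    }
  }
  return result;
}

int main() {
  void* p = CheckedVirtualAlloc(1 << 20, MEM_RESERVE | MEM_COMMIT,
                                PAGE_READWRITE);
  if (p) {
    VirtualFree(p, 0, MEM_RELEASE);
  }
  return 0;
}
```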

The initial settings for this feature are conservative, and some tuning may be needed.

  • The threshold used for virtual memory is 128MB, i.e. if the amount of available virtual memory drops below 128MB a memory pressure event is triggered.  (This number can be changed in about:config via the “memory.low_virtual_memory_threshold_mb” setting.) Hopefully this will reduce the number of OOM crashes that occur on Windows.
  • The threshold used for physical memory is 0, which means that it is currently ignored.  (This number can be changed in about:config via the “memory.low_physical_mem_threshold_mb” setting.) The reason for ignoring it currently is that low physical memory may be caused by a program other than Firefox — we don’t want to start discarding data if Firefox is only using a small fraction of physical memory.  In the future we can probably do better here by triggering memory pressure events if the fraction of physical memory used by Firefox exceeds some threshold;  hopefully this will avoid paging when memory usage is high.
  • Memory pressure events are throttled so they aren’t sent more than once every 10 seconds.  (This number can be changed in about:config via the “memory.low_physical_memory_notification_interval_ms” setting.)  This prevents thrashing in cases where the memory pressure events don’t help reduce memory consumption.

Memory pressure events are also currently unevenly observed — plenty of data could be dropped that currently isn’t — so there is room for improvement there.

Finally, there’s another bug still open to add similar adaptive behaviour on Mac, Linux and Android.  Note that memory pressure events are also triggered on Android when Firefox goes into the background.

Add-ons

There has been good news for add-ons:  the #1, #6 and #8 most popular add-ons on AMO have all recently seen some MemShrink-related improvements.

Adding memory reporters to add-ons is a really good thing to do, and I plan to write some documentation to encourage other add-on developers to follow suit.  However, if you are an add-on developer and are thinking of doing this, I need to stress one thing:  the reports must have KIND_OTHER so that they end up in the “Other Measurements” list in about:memory.  If they are put in the “explicit” tree, then some memory bytes may be counted twice (once by Firefox, once by the add-on) which will completely screw up about:memory’s results.

Memory Reporters

I landed several improvements to memory reporters.

Bug Counts

Here are the current bug counts.

  • P1: 26 (-2/+1)
  • P2: 139 (-6/+6)
  • P3: 64 (-2/+2)
  • Unprioritized: 0 (-1/+0)

Nice to see the total count drop, even if only by one.

Due to the Christmas break there won’t be a MemShrink meeting nor a MemShrink progress report next week.  I’ll be back in two weeks, enjoy the break!


MemShrink progress, week 26

It’s been six months since MemShrink started!  Here are this week’s events of note.

Memory Reporting

I integrated DMD support into Firefox.  This is a big deal because DMD is crucial for improving the coverage and correctness of memory reporters.  There are several consequences of this.

  • You no longer need to apply a patch to Firefox if you want to use DMD, you just have to configure with --enable-dmd.  (Full instructions are here.)
  • Any memory reporter written in the new style (i.e. using a nsMallocSizeOfFun function created with NS_MEMORY_REPORTER_MALLOC_SIZEOF_FUN) will automatically get coverage in DMD.
  • DMD finds bugs in memory reporters, but it can also find bugs elsewhere.  For example, this week it found that some BaseShapes were being shared incorrectly, which Brian Hackett fixed.

I updated the layout memory reporters to use the new style.  I also updated the documentation on writing memory reporters, to cover more complicated topics like how to measure classes that involve inheritance.  I also added a request:  if you write a memory reporter, please add me as a co-reviewer.
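For flavour, the usual pattern for measuring a class hierarchy looks roughly like this (the MallocSizeOf typedef and all names here are illustrative, not the exact Mozilla signatures): each class measures the heap blocks it owns in SizeOfExcludingThis, derived classes chain to their base class, and only the outermost call adds the object itself.

```cpp
#include <cstddef>
#include <cstdio>

// Illustrative stand-in for the "malloc size of" function that memory
// reporters pass around; the real one asks the heap allocator how big a
// block actually is (including any rounding-up).
typedef size_t (*MallocSizeOf)(const void* ptr);

class Base {
 public:
  virtual ~Base() {}
  // Measure heap blocks owned by Base, but not the object itself.
  virtual size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const {
    return aMallocSizeOf(mName);
  }
  // Only the outermost call adds the object's own allocation.
  size_t SizeOfIncludingThis(MallocSizeOf aMallocSizeOf) const {
    return aMallocSizeOf(this) + SizeOfExcludingThis(aMallocSizeOf);
  }
 protected:
  char* mName = nullptr;
};

class Derived : public Base {
 public:
  size_t SizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const override {
    // Chain to the base class so its members are counted exactly once.
    return Base::SizeOfExcludingThis(aMallocSizeOf) + aMallocSizeOf(mBuffer);
  }
 private:
  char* mBuffer = nullptr;
};

// Dummy measurer so the sketch runs standalone.
static size_t DummySizeOf(const void* ptr) { return ptr ? 16 : 0; }

int main() {
  Derived d;
  printf("%zu bytes\n", d.SizeOfIncludingThis(DummySizeOf));
  return 0;
}
```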

Johnny Stenback implemented per-window DOM memory reporters, which gives much more detail for DOM memory usage in about:memory.  Here’s an example:

[Screenshot: per-window dom reporter output in about:memory]

Memory Usage Improvements

Boris Zbarsky lazily initialized some rulehash tables, saving about 45KB per blank tab;  this is a nice win for those people who have on-demand tab loading combined with sessions with many tabs.  The same patch also made one of the rulehash tables smaller when it is initialized, which can cause multi-MB savings on workloads of only a few tabs.
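Lazy initialization of this kind follows a simple pattern; here is a generic sketch (not the actual rulehash code): the table is only built the first time something is added, so a blank tab that never needs it pays nothing.

```cpp
#include <memory>
#include <string>
#include <unordered_map>

class RuleCache {
 public:
  // The table is created on first use, so documents that never add a rule
  // never pay for an empty (but still sizeable) hash table.
  void Add(const std::string& key, int value) {
    if (!mTable) {
      mTable.reset(new std::unordered_map<std::string, int>());
    }
    (*mTable)[key] = value;
  }

  const int* Lookup(const std::string& key) const {
    if (!mTable) {
      return nullptr;  // nothing was ever added
    }
    auto it = mTable->find(key);
    return it == mTable->end() ? nullptr : &it->second;
  }

 private:
  std::unique_ptr<std::unordered_map<std::string, int>> mTable;
};

int main() {
  RuleCache cache;
  cache.Add("div", 1);
  return cache.Lookup("div") ? 0 : 1;
}
```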

Bobby Holley fixed a bad memory leak in xpconnect that could cause unbounded amounts of memory to be leaked, albeit in somewhat unusual circumstances.  The bug was only in Firefox 9 and Firefox 10;  it’s been fixed for 10, and the patches to fix it for 9 are ready to land.

Fabrice Desré fixed a regression that caused a zombie compartment when visiting addons.mozilla.org.

Bug Counts

Here are the current bug counts.

  • P1: 27 (-1/+1)
  • P2: 139 (-5/+3)
  • P3: 62 (-0/+2)
  • Unprioritized: 1 (-0/+1)

That’s a net change of +1, and we only had to triage seven bugs in today’s meeting.  And if the trees hadn’t been closed for the past few days I could have fixed two more P2 bugs.

Just for fun, I also counted the number of RESOLVED FIXED (i.e. genuine, not duplicates or invalid) MemShrink bugs.

  • P1: 29
  • P2: 70
  • P3: 14
  • Unprioritized: 35

(If you’re wondering why the number of unprioritized fixed bugs is so high, it’s because MemShrink bugs are only prioritized in MemShrink meetings, which means that any bug that gets filed and fixed in less than 7 days doesn’t get prioritized.)

So there are more bugs still open than have been fixed, but the rate of bug growth is close to zero.


MemShrink progress, week 25

It was a good week for MemShrink.

ObjShrink

The biggest news this week is that Brian Hackett landed his objshrink work.  This has roughly halved the size of Firefox’s representation of JavaScript objects.  It’s also reduced the amount of memory Firefox uses for shapes, a related data structure.  If  you look at about:memory you can see that “gc-heap/objects” and “gc-heap/shapes” entries account for many MBs of memory, so this is a big win.  Read here (under the “Objects” and “Shape” headings) if you want more details.

Memory Reporting

I added some more JavaScript memory reporters and fixed some problems with the existing ones.  Specifically…

  • New: “runtime/threads/temporary” counts various pieces of short-lived data, mostly parse nodes.  It sometimes spikes up to multiple MBs.
  • New: “runtime/threads/normal” measures thread data on the heap that isn’t covered by “runtime/threads/temporary”.  It’s normally a few 100s of KBs.
  • New: “runtime/contexts” measures JSContexts and various related structures, which usually account for a few 10s or 100s of KB.
  • New/fixed: “runtime/threads/regexp-code” measures code generated by the regexp compiler.  This was previously measured under “mjit-code/regexp” but that reporter was broken recently when the location of this generated code was moved, so I fixed it.
  • Fixed: “runtime/atoms-table” and “runtime/runtime-object” are existing reporters that together often account for up to 4.5MB.  They were incorrectly marked as measuring non-heap memory instead of heap memory, meaning that the “explicit” and “heap-unclassified” totals were both incorrectly inflated by that amount.  While fixing this, I also confirmed that we weren’t making the same mistake with any of our other memory reporters.
  • Renamed: “runtime/threads/stack-committed” was previously called “stack-size”.  I renamed it to go with the other “runtime/threads” reports because that’s where it conceptually belongs.

I also added a memory reporter for XPConnect.  It typically accounts for 1MB or more.

I have a lot more work in the pipeline to improve the coverage and correctness of memory reporters.  More about this in the coming weeks as I land the relevant patches!

Other

Joel Maher got RSS memory measurements working for Android on Talos, Mozilla’s performance regression testing suite.  This was a MemShrink:P1 bug.  Graphs of the live data can be seen here.

Benoit Jacob made some improvements to how Firefox manages and reports memory used by WebGL.

Bug Counts

Here’s the current bug count.

  • P1: 27 (-1/+1)
  • P2: 141 (-3/+5)
  • P3: 62 (-1/+2)
  • Unprioritized: 0 (-0/+0)

Only three more MemShrink bugs than this time last week!  Pretty good.