Category Archives: MemShrink

MemShrink status

Mozilla’s MemShrink project started almost six years ago. It successfully reduced Firefox’s memory consumption to the point where Firefox is now widely (and accurately) seen as the least memory-hungry browser.

Most of the big improvements were made in the first two or three years of the MemShrink project, and it doesn’t get much publicity any more, but it is still ticking along quietly: meetings occur, regressions are tracked, measurements are made, and so on. Some time ago I passed leadership of it to the capable hands of Eric Rahm, who has just written a status update that is worth reading.

Eric also wrote recently about the reincarnation of areweslimyet.com. He maintained that site for several years, a thankless but important task, but he won’t need to do so any more because its functionality has been folded into Mozilla’s main performance monitoring tools. Thank you, Eric!

Firefox 64-bit for Windows can take advantage of more memory

By default, on Windows, Firefox is a 32-bit application. This means that it is limited to using at most 4 GiB of memory, even on machines that have more than 4 GiB of physical memory (RAM). In fact, depending on the OS configuration, the limit may be as low as 2 GiB.

Now, 2–4 GiB might sound like a lot of memory, but it’s not that unusual for power users to use that much. Such users include:

  • users with many (dozens or even hundreds of) tabs open;
  • users with many (dozens of) extensions;
  • users of memory-hungry web sites and web apps; and
  • users who do all of the above!

Furthermore, in practice it’s not possible to totally fill up this available space because fragmentation inevitably occurs. For example, Firefox might need to make a 10 MiB allocation and there might be more than 10 MiB of unused memory, but if that available memory is divided into many pieces all of which are smaller than 10 MiB, then the allocation will fail.

When an allocation does fail, Firefox can sometimes handle it gracefully. But often this isn’t possible, in which case Firefox will abort. Although this is a controlled abort, the effect for the user is basically identical to an uncontrolled crash, and they’ll have to restart Firefox. A significant fraction of Firefox crashes/aborts are due to this problem, known as address space exhaustion.
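To make this concrete, here is a minimal C++ sketch (not Firefox’s actual code; the function names are hypothetical) of the two behaviours just described: a fallible allocation whose failure the caller can handle gracefully, and an infallible one that ends in a controlled abort.

#include <cstddef>
#include <cstdio>
#include <cstdlib>

// Fallible allocation: the caller checks for failure and can degrade
// gracefully, e.g. by skipping an optional cache.
void* AllocateFallible(size_t aSize) {
  return malloc(aSize);  // may return nullptr under address space exhaustion
}

// Infallible allocation: there is no way to recover from failure, so the
// program aborts in a controlled fashion. To the user this looks just like
// a crash.
void* AllocateInfallible(size_t aSize) {
  void* p = malloc(aSize);
  if (!p) {
    fprintf(stderr, "out of memory: %zu bytes\n", aSize);
    abort();
  }
  return p;
}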

Fortunately, there is a solution to this problem available to anyone using a 64-bit version of Windows: use a 64-bit version of Firefox. Now, 64-bit applications typically use more memory than 32-bit applications. This is because pointers, a common data type, are twice as big; a rough estimate for 64-bit Firefox is that it might use 25% more memory. However, 64-bit applications also have a much larger address space, which means they can access vast amounts of physical memory, and address space exhaustion is all but impossible. (In this way, switching from a 32-bit version of an application to a 64-bit version is the closest you can get to downloading more RAM!)
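The pointer-size difference is easy to see for yourself. This trivial C++ program prints 4 on a 32-bit build and 8 on a 64-bit build:

#include <cstdio>

int main() {
  // Pointers double in size in a 64-bit build, which is why pointer-heavy
  // data structures (of which Firefox has many) use more memory.
  printf("sizeof(void*) = %zu bytes\n", sizeof(void*));
  return 0;
}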

Therefore, if you have a machine with 4 GiB or less of RAM, switching to 64-bit Firefox probably won’t help. But if you have 8 GiB or more, switching to 64-bit Firefox probably will help the memory usage situation.

Official 64-bit versions of Firefox have been available since December 2015. If the above discussion has interested you, please try them out. But note the following caveats.

  • Flash and Silverlight are the only supported 64-bit plugins.
  • There are some Flash content regressions due to our NPAPI sandbox (for content that uses advanced features like GPU acceleration or microphone APIs).

On the flip side, as well as avoiding address space exhaustion problems, a security feature known as ASLR works much better in 64-bit applications than in 32-bit applications, so 64-bit Firefox will be slightly more secure.

Work is ongoing to fix or minimize these caveats, and 64-bit Firefox is expected to be rolled out to increasing numbers of users in the not-too-distant future.

UPDATE: Chris Peterson gave me the following measurements about daily active users on Windows.

  • 66.0% are running 32-bit Firefox on 64-bit Windows. These users could switch to a 64-bit Firefox.
  • 32.3% are running 32-bit Firefox on 32-bit Windows. These users cannot switch to a 64-bit Firefox.
  • 1.7% are running 64-bit Firefox already.

UPDATE 2: Also from Chris Peterson, here are links to 64-bit builds for all the channels:

More compacting GC

Jon Coppeard recently extended SpiderMonkey’s compacting GC abilities. Previously, the GC could only compact arenas containing JavaScript objects. Now it can also compact arenas containing shapes (a data structure used within SpiderMonkey which isn’t visible to user code) and strings, which are two of the largest users of memory in the GC heap after objects.

These improvements should result in savings of multiple MiBs in most workloads, and they are on track to ship in Firefox 48, which will be released in early August. Great work, Jon!

Compacting GC

Go read Jon Coppeard’s description of the compacting GC algorithm now used by SpiderMonkey!

Firefox 41 will use less memory when running AdBlock Plus

Last year I wrote about AdBlock Plus’s effect on Firefox’s memory usage. The most important part was the following.

First, there’s a constant overhead just from enabling ABP of something like 60–70 MiB. […] This appears to be mostly due to additional JavaScript memory usage, though there’s also some due to extra layout memory.

Second, there’s an overhead of about 4 MiB per iframe, which is mostly due to ABP injecting a giant stylesheet into every iframe. Many pages have multiple iframes, so this can add up quickly. For example, if I load TechCrunch and roll over the social buttons on every story […], without ABP, Firefox uses about 194 MiB of physical memory. With ABP, that number more than doubles, to 417 MiB.

An even more extreme example is this page, which contains over 400 iframes. Without ABP, Firefox uses about 370 MiB. With ABP, that number jumps to 1960 MiB.

(This description was imprecise; the overhead is actually per document, which includes both top-level documents in a tab and documents in iframes.)

Last week Mozilla developer Cameron McCormack landed patches to fix bug 77999, which was filed more than 14 years ago. These patches enable sharing of CSS-related data — more specifically, they add data structures that share the results of cascading user agent style sheets — and in doing so they entirely fix the second issue, which is the more important of the two.

For example, on the above-mentioned “extreme example” (a.k.a. the Vim Color Scheme Test) memory usage dropped by 3.62 MiB per document. There are 429 documents on that page, which is a total reduction of about 1,550 MiB, reducing memory usage for that page down to about 450 MiB, which is not that much more than when AdBlock Plus is absent. (All these measurements are on a 64-bit build.)

I also did measurements on various other sites and confirmed the consistent saving of ~3.6 MiB per document when AdBlock Plus is enabled. The number of documents varies widely from page to page, so the exact effect depends greatly on workload. (I wanted to test TechCrunch again, but its front page has been significantly changed so it no longer triggers such high memory usage.) For example, for one of my measurements I tried opening the front page and four articles from each of nytimes.com, cnn.com and bbc.co.uk, for a total of 15 tabs. With Cameron’s patches applied Firefox with AdBlock Plus used about 90 MiB less physical memory, which is a reduction of over 10%.

Even when AdBlock Plus is not enabled this change has a moderate benefit. For example, in the Vim Color Scheme Test the memory usage for each document dropped by 0.09 MiB, reducing memory usage by about 40 MiB.

If you want to test this change out yourself, you’ll need a Nightly build of Firefox and a development build of AdBlock Plus. (Older versions of AdBlock Plus don’t work with Nightly due to a recent regression related to JavaScript parsing). In Firefox’s about:memory page you’ll see the reduction in the “style-sets” measurements. You’ll also see a new entry under “layout/rule-processor-cache”, which is the measurement of the newly shared data; it’s usually just a few MiB.

This improvement is on track to make it into Firefox 41, which is scheduled for release on September 22, 2015. For users on other release channels, Firefox 41 Beta is scheduled for release on August 11, and Firefox 41 Developer Edition is scheduled to be released in the next day or two.

Cumulative heap profiling in Firefox with DMD

DMD is a tool that I originally created to help identify where new memory reporters should be added to Firefox in order to reduce the “heap-unclassified” measurement in about:memory. (The name is actually short for “Dark Matter Detector”, because we sometimes call the “heap-unclassified” measurement “dark matter”.)

Recently, I’ve modified DMD to become a more general heap profiling tool. It now has three distinct modes of operation.

  1. “Dark matter”: this mode gives you DMD’s original behaviour.
  2. “Live”: this mode tracks all the live blocks on the system heap, and lets you take snapshots at particular points in time.
  3. “Cumulative”: this mode tracks all the blocks that have ever been allocated on the system heap, and so gives you information about all the allocations done by Firefox during an entire session.

Most memory profilers (including about:memory) are snapshot-based, and so work much like DMD’s “live” mode. But “cumulative” mode is equally interesting.

In particular, unlike “live” mode, “cumulative” mode tells you about parts of the code that are responsible for allocating many short-lived heap blocks (sometimes called “heap churn”). Such allocations can hurt performance: allocations and deallocations themselves aren’t free, particularly because they require obtaining a global lock; such code often involves unnecessary initialization or copying of heap data; and if these allocations come in a variety of sizes they can cause additional heap fragmentation.
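As a hypothetical illustration (this is not code from Firefox), here is a common churn pattern in C++, and the simple fix of hoisting the buffer out of the loop so that its capacity is reused:

#include <string>
#include <vector>

// Churn-y: allocates and frees a fresh heap buffer on every iteration.
void ProcessAll(const std::vector<std::string>& aInputs) {
  for (const std::string& input : aInputs) {
    std::string line = input + "\n";  // new allocation every time
    // ... process line ...
  }
}

// Better: one buffer, reused across iterations once it has grown.
void ProcessAllReuse(const std::vector<std::string>& aInputs) {
  std::string line;
  for (const std::string& input : aInputs) {
    line = input;   // usually reuses the existing capacity
    line += '\n';
    // ... process line ...
  }
}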

Another nice thing about cumulative heap profiling is that, unlike live heap profiling, you don’t have to decide when to take snapshots. You can just profile an entire workload of interest and get the results at the end.

I’ve used DMD’s cumulative mode to find inefficiencies in SpiderMonkey’s source compression and code generation, SQLite, NSS, nsTArray, XDR encoding, Gnome start-up, IPC messaging, nsStreamLoader, cycle collection, and safe-browsing. There are “start doing something clever” optimizations and then there are “stop doing something stupid” optimizations, and every one of these fixes has been one of the latter. Each change has avoided cumulative heap allocations ranging from tens to thousands of MiBs.

It’s typically difficult to quantify any speed-ups from these changes, because the workloads are noisy and non-deterministic, but I’m convinced that simple changes to fix these issues are worthwhile. For one, I used cumulative profiling (via a different tool) to drive the major improvements I made to pdf.js earlier this year. Also, Chrome developers have found that “Chrome in general is *very* close to the threshold where heap lock contention causes noticeable UI lag”.

So far I have only profiled a few simple workloads. There are all sorts of things I haven’t tried: text-heavy pages, image-heavy pages, audio and video, WebRTC, WebGL, popular benchmarks… the list goes on. I intend to do more profiling and fix things where I can, but it would be great to have help from domain experts with this stuff. If you want to try out cumulative heap profiling in Firefox, please read the DMD instructions and feel free to ask me for help. In particular, I now have a good feel for which hot allocations are unavoidable and reasonable — plenty of them, of course — and which are avoidable. Let’s get rid of the avoidable ones.

Better documentation for memory profiling and leak detection tools

Until recently, the documentation for all of Mozilla’s memory profiling and leak detection tools had some major problems.

  • It was scattered across MDN, the Mozilla Wiki, and the Mozilla archive site (yes, really).
  • Documentation for several tools was spread across multiple pages.
  • Documentation for some tools was meagre, non-existent, or overly verbose.
  • Some of the documentation was out of date, e.g. describing tools that no longer exist.

A little while back I fixed these problems.

  • The documentation for these tools is now all on MDN. If you look at the MDN Performance page in the “Memory profiling and leak detection tools” section, you’ll see a brief description of each tool that explains the circumstances in which it is useful, and a link to the relevant documentation.
  • The full list of documented tools includes: about:memory, DMD, areweslimyet.com, BloatView, Refcount tracing and balancing, GC and CC logs, Valgrind, LeakSanitizer, Apple tools, TraceMalloc, Leak Gauge, and LogAlloc.
  • As well as consolidating all the pages in one place, I also improved some of the pages (with the help of people like Andrew McCreight). In particular, about:memory now has reasonably detailed documentation, something it has lacked until now.

Please take a look, and if you see any problems let me know. Or, if you’re feeling confident just fix things yourself! Thanks.

Please grow your buffers exponentially

If you record every heap allocation and re-allocation done by Firefox you find some interesting things. In particular, you find some sub-optimal buffer growth strategies that cause a lot of heap churn.

Think about a data structure that involves a contiguous, growable buffer, such as a string or a vector. If you append to it and it doesn’t have enough space for the appended elements, you need to allocate a new buffer, copy the old contents to the new buffer, and then free the old buffer. realloc() is usually used for this, because it does these three steps for you.

The crucial question: when you have to grow a buffer, how much do you grow it? One obvious answer is “just enough for the new elements”. That might seem space-efficient at first glance, but if you have to repeatedly grow the buffer it can quickly turn bad.
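In C++, the “just enough” strategy looks something like this hypothetical sketch (error handling omitted for brevity):

#include <cstdlib>
#include <cstring>

struct Buffer {
  char*  data = nullptr;
  size_t len = 0;
};

// "Just enough" growth: every append reallocates to the exact new size,
// so appending n bytes one at a time causes n reallocations.
void AppendJustEnough(Buffer& aBuf, const char* aBytes, size_t aN) {
  aBuf.data = static_cast<char*>(realloc(aBuf.data, aBuf.len + aN));
  memcpy(aBuf.data + aBuf.len, aBytes, aN);
  aBuf.len += aN;
}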

Consider a simple but not outrageous example. Imagine you have a buffer that starts out 1 byte long and you add single bytes to it until it is 1 MiB long. If you use the “just-enough” strategy you’ll cumulatively allocate this much memory:

1 + 2 + 3 + … + 1,048,575 + 1,048,576 = 549,756,338,176 bytes

Ouch. O(n²) behaviour really hurts when n gets big enough. Of course the peak memory usage won’t be nearly this high, but all those reallocations and all that copying will be slow.

In practice it won’t be this bad because heap allocators round up requests, e.g. if you ask for 123 bytes you’ll likely get something larger like 128 bytes. The allocator used by Firefox (an old, extensively-modified version of jemalloc) rounds up all requests between 4 KiB and 1 MiB to the nearest multiple of 4 KiB. So you’ll actually allocate approximately this much memory:

4,096 + 8,192 + 12,288 + … + 1,044,480 + 1,048,576 = 134,742,016 bytes

(This ignores the sub-4 KiB allocations, which in total are negligible.) Much better. And if you’re lucky the OS’s virtual memory system will do some magic with page tables to make the copying cheap. But still, it’s a lot of churn.

A strategy that is usually better is exponential growth. Doubling the buffer each time is the simplest strategy:

4,096 + 8,192 + 16,384 + 32,768 + 65,536 + 131,072 + 262,144 + 524,288 + 1,048,576 = 2,093,056 bytes

That’s more like it; the cumulative size is just under twice the final size, and the series is short enough now to write it out in full, which is nice — calls to malloc() and realloc() aren’t that cheap because they typically require acquiring a lock. I particularly like the doubling strategy because it’s simple and it also avoids wasting usable space due to slop.
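In code, the doubling strategy just needs a capacity field alongside the length. Here is a minimal sketch, hypothetical like the one above (error handling again omitted):

#include <cstdlib>
#include <cstring>

struct GrowableBuffer {
  char*  data = nullptr;
  size_t len = 0;
  size_t cap = 0;
};

// Doubling growth: reallocation happens only when capacity is exhausted,
// so appending n bytes one at a time causes only about log2(n) reallocations.
void AppendDoubling(GrowableBuffer& aBuf, const char* aBytes, size_t aN) {
  if (aBuf.len + aN > aBuf.cap) {
    size_t newCap = aBuf.cap ? aBuf.cap * 2 : 4096;  // 4 KiB starting size
                                                     // is an arbitrary choice
    while (newCap < aBuf.len + aN) {
      newCap *= 2;
    }
    aBuf.data = static_cast<char*>(realloc(aBuf.data, newCap));
    aBuf.cap = newCap;
  }
  memcpy(aBuf.data + aBuf.len, aBytes, aN);
  aBuf.len += aN;
}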

Recently I’ve converted “just enough” growth strategies to exponential growth strategies in XDRBuffer and nsTArray, and I also found a case in SQLite that Richard Hipp has fixed. These pieces of code now match numerous places that already used exponential growth: pldhash, JS::HashTable, mozilla::Vector, JSString, nsString, and pdf.js.

Pleasingly, the nsTArray conversion had a clear positive effect. Not only did the exponential growth strategy reduce the amount of heap churn and the number of realloc() calls, it also reduced heap fragmentation: the “heap-overhead” part of the purple measurement on AWSY (a.k.a. “RSS: After TP5, tabs closed [+30s, forced GC]”) dropped by 4.3 MiB! This makes sense if you think about it: an allocator can fulfil power-of-two requests like 64 KiB, 128 KiB, and 256 KiB with less waste than it can awkward requests like 244 KiB, 248 KiB, 252 KiB, etc.

So, if you know of some more code in Firefox that uses a non-exponential growth strategy for a buffer, please fix it, or let me know so I can look at it. Thank you.

Per-class JS object and shape measurements in Firefox’s about:memory

A few days ago I landed support for per-class reporting of JavaScript objects and shapes in about:memory. (Shapes are auxiliary, engine-internal data structures that are used to facilitate object property accesses. They can use large amounts of memory.)

Prior to this patch, the JavaScript objects and shapes within a single compartment (which corresponds to a JavaScript window or global object) would be covered by measurements in a small number of fixed categories.

10,179,152 B (02.59%) -- objects
├───6,749,600 B (01.72%) -- gc-heap
│   ├──3,512,640 B (00.89%) ── dense-array
│   ├──2,965,184 B (00.75%) ── ordinary
│   └────271,776 B (00.07%) ── function
├───3,429,552 B (00.87%) -- malloc-heap
│   ├──2,377,600 B (00.61%) ── slots
│   └──1,051,952 B (00.27%) ── elements/non-asm.js
└───────────0 B (00.00%) ── non-heap/code/asm.js
474,144 B (00.12%) -- shapes
├──316,832 B (00.08%) -- gc-heap
│  ├──167,320 B (00.04%) -- tree
│  │  ├──152,400 B (00.04%) ── global-parented
│  │  └───14,920 B (00.00%) ── non-global-parented
│  ├──125,352 B (00.03%) ── base
│  └───24,160 B (00.01%) ── dict
└──157,312 B (00.04%) -- malloc-heap
   ├───99,328 B (00.03%) ── compartment-tables
   ├───35,040 B (00.01%) ── tree-tables
   ├───12,704 B (00.00%) ── dict-tables
   └───10,240 B (00.00%) ── tree-shape-kids

These measurements are only interesting to those who understand the guts of the JavaScript engine.

In contrast, objects and shapes are now grouped by their class. Per-class measurements relate back to the JavaScript code in a more obvious way, making these measurements useful to a wider range of people.

10,515,296 B (02.69%) -- classes
├───4,566,840 B (01.17%) ++ class(Array)
├───3,618,464 B (00.93%) ++ class(Object)
├───1,755,232 B (00.45%) ++ class(HTMLDivElement)
├─────333,624 B (00.09%) ++ class(Function)
├─────165,624 B (00.04%) ++ class(<non-notable classes>)
├──────38,736 B (00.01%) ++ class(Window)
└──────36,776 B (00.01%) ++ class(CSS2PropertiesPrototype)

(The <non-notable classes> entry aggregates all classes that are smaller than a certain threshold. This prevents any long tail of classes from bloating about:memory too much.)
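The aggregation itself is straightforward. Here is a hypothetical C++ sketch of the idea (not the actual about:memory reporter code):

#include <cstdint>
#include <map>
#include <string>

// Fold any class whose total is below the threshold into a single
// "<non-notable classes>" entry, so the long tail of tiny classes
// doesn't bloat the output.
std::map<std::string, uint64_t> AggregateClasses(
    const std::map<std::string, uint64_t>& aPerClassBytes,
    uint64_t aThreshold) {
  std::map<std::string, uint64_t> result;
  for (const auto& [name, bytes] : aPerClassBytes) {
    if (bytes >= aThreshold) {
      result[name] += bytes;
    } else {
      result["<non-notable classes>"] += bytes;
    }
  }
  return result;
}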

Expanding the sub-tree for the Object class, we see that the fixed categories are still present, for those who are interested in them.

3,618,464 B (00.93%) -- class(Object)
├──3,540,672 B (00.91%) -- objects
│  ├──2,349,632 B (00.60%) -- malloc-heap
│  │  ├──2,348,480 B (00.60%) ── slots
│  │  └──────1,152 B (00.00%) ── elements/non-asm.js
│  └──1,191,040 B (00.30%) ── gc-heap
└─────77,792 B (00.02%) -- shapes
      ├──57,376 B (00.01%) -- gc-heap
      │  ├──47,120 B (00.01%) ── tree
      │  ├───5,360 B (00.00%) ── dict
      │  └───4,896 B (00.00%) ── base
      └──20,416 B (00.01%) -- malloc-heap
         ├──11,552 B (00.00%) ── tree-tables
         ├───6,912 B (00.00%) ── tree-kids
         └───1,952 B (00.00%) ── dict-tables

Although the per-class measurements often aren’t surprising — Object and Array objects and shapes often dominate — sometimes they are. Consider the following examples.

  • The above example has 1.7 MiB of HTMLDivElement objects and shapes, which indicates that the compartment contains many div elements.
  • If you have lots of memory used by Function objects and shapes, it suggests that the code is creating excessive numbers of closures.
  • Just this morning a visitor to the #memshrink IRC channel was wondering why they had 11 MiB of XPC_WN_NoMods_NoCall_Proto_JSClass objects and shapes in one compartment. (This is a question I currently don’t have a good answer for.)

Historically, the data-dependent measurements in about:memory — e.g. those done on a per-tab, or per-compartment, or per-image, or per-script basis — have been more useful and interesting than the ones in fixed categories, because they map obviously to browser and code artifacts. For example, per-tab measurements let you know if a particular web page is using excessive memory, and per-compartment measurements revealed the existence of zombie compartments, a kind of bad memory leak that used to be common in Firefox and its add-ons.

I’m hoping that these per-class measurements will prove similarly useful. Keep an eye on them, and please let me know and/or file bugs if you see any surprising cases.

A final note: Mozilla’s devtools team is currently making great progress on a JavaScript memory profiler, which will give finer-grained measurements of JavaScript memory usage in web content. Although there will be some overlap between that tool and these new measurements in about:memory, it will be useful to have both tools, because each one will be appropriate in different circumstances.

The story of a tricky bug

The Bug Report

A few weeks ago I skimmed through /r/firefox and saw a post by a user named DeeDee_Z complaining about high memory usage in Firefox. Somebody helpfully suggested that DeeDee_Z look at about:memory, which revealed thousands of blank windows like this:

  │    │  ├────0.15 MB (00.01%) ++ top(about:blank, id=1001)
  │    │  ├────0.15 MB (00.01%) ++ top(about:blank, id=1003)
  │    │  ├────0.15 MB (00.01%) ++ top(about:blank, id=1005)

I filed bug 1041808 and asked DeeDee_Z to sign up to Bugzilla so s/he could join the discussion. What followed was several weeks of back and forth, involving suggestions from no fewer than seven Mozilla employees. DeeDee_Z patiently tried numerous diagnostic steps, such as running in safe mode, pasting info from about:support, getting GC/CC logs, and doing a malware scan. (Though s/he did draw the line at running wireshark to detect if any unusual network activity was happening, which I think is fair enough!)

But still there was no progress. Nobody else was able to reproduce the problem, and even DeeDee_Z had trouble making it happen reliably.

And then on August 12, more than three weeks after the bug report was filed, Peter Van der Beken commented that he had seen similar behaviour on his machine, and by adding some logging to Firefox’s guts he had a strong suspicion that it was related to having the “keep until” setting for cookies set to “ask me every time”. DeeDee_Z had the same setting, and quickly confirmed that changing it fixed the problem. Hooray!

I don’t know how Peter found the bug report — maybe he went to file a new bug report about this problem and Bugzilla’s duplicate detection identified the existing bug report — but it’s great that he did. Two days later he landed a simple patch to fix the problem. In Peter’s words:

The patch makes the dialog for allowing/denying cookies actually show up when a cookie is set through the DOM API. Without the patch the dialog is created, but never shown and so it sticks around forever.

This fix is on track to ship in Firefox 34, which is due to be released in late November.

Takeaway lessons

There are a number of takeaway lessons from this story.

First, a determined bug reporter is enormously helpful. I often see vague complaints about Firefox on websites (or even in Bugzilla) with no responses to follow-up questions. In contrast, DeeDee_Z’s initial complaint was reasonably detailed. More importantly, s/he did all the follow-up steps that people asked her/him to do, both on Reddit and in Bugzilla. The about:memory data made it clear it was some kind of window leak, and although the follow-up diagnostic steps didn’t lead to the fix in this case, they did help rule out a number of possibilities. Also, DeeDee_Z was extremely quick to confirm that Peter’s suggestion about the cookie setting fixed the problem, which was very helpful.

Second, many (most?) problems don’t affect everyone. This was quite a nasty problem, but the “ask me every time” setting is not commonly used, because it causes lots of dialogs to pop up, which few users have the patience to deal with. It’s very common that people have a problem with Firefox (or any other piece of software), incorrectly assume that it affects everyone else equally, and conclude with “I can’t believe anybody uses this thing”. I call this “your experience is not universal”. This is particularly true for web browsers, which unfortunately are enormously complicated and have many combinations of settings that get little or no testing.

Third, and relatedly, it’s difficult to fix problems that you can’t reproduce. It’s only because Peter could reproduce the problem that he was able to do the logging that led him to the solution.

Fourth, it’s important to file bug reports in Bugzilla. Bugzilla is effectively the Mozilla project’s memory, and it’s monitored by many contributors. The visibility of a bug report in Bugzilla is vastly higher than a random complaint on some other website. If the bug report hadn’t been in Bugzilla, Peter wouldn’t have stumbled across it. So even if he had fixed it, DeeDee_Z wouldn’t have known and probably would have been stuck with the problem until Firefox 34 came out. That’s assuming s/he didn’t switch to a different browser in the meantime.

Fifth, Mozilla does care about memory usage, particularly cases where memory usage balloons unreasonably. We’ve had a project called MemShrink running for more than three years now. We’ve fixed hundreds of problems, big and small, and continue to do so. Please use about:memory to start the diagnosis, and add the “[MemShrink]” tag to any bug reports in Bugzilla that relate to memory usage, and we will triage them in our fortnightly MemShrink meetings.

Finally, luck plays a part. I don’t often look at /r/firefox, and I could have easily missed DeeDee_Z’s complaint. Also, it was lucky that Peter found the bug in Bugzilla. Many tricky bugs don’t get resolved this quickly.