
MemShrink progress, week 31

Lots of good stuff happened this week in MemShrink-land.

Necko Buffer Cache

I removed the Necko buffer cache.  This cache used nsRecyclingAllocator to delay the freeing of up to 24 short-lived 32KB chunks (24 x 16KB on Mobile/B2G) in the hope that they could be reused soon.  The cache would fill up very quickly, and the chunks wouldn’t be freed unless zero re-allocations occurred for 15 minutes.  The idea was to avoid malloc/free churn (a sketch of the general pattern follows the list below), but the re-allocations weren’t that frequent, and heap allocators like jemalloc tend to handle cases like this pretty well, so performance wasn’t noticeably affected.  Removing the cache has the following additional benefits.

  • nsRecyclingAllocator added a clownshoe-ish extra word to each chunk, which meant that the 32KB (or 16KB) allocations were being rounded up by jemalloc to 36KB (or 20KB), resulting in 96KB (24 x 4KB) of wasted memory.
  • This cache wasn’t covered by memory reporters, so about:memory’s “heap-unclassified” number has dropped by 864KB (24 x 36KB) on desktop and 480KB (24 x 20KB) on mobile.
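
For context, here is roughly what such a recycling cache looks like.  This is a minimal sketch of the general pattern, not the actual nsRecyclingAllocator code; chunk_alloc and chunk_free are made-up names, and the 15-minute flush timer is omitted:

#include <stdlib.h>

#define NUM_CHUNKS 24           /* the cache held up to 24 chunks */
#define CHUNK_SIZE (32 * 1024)  /* 32KB on desktop (16KB on Mobile/B2G) */

static void *recycled[NUM_CHUNKS];  /* freed chunks kept around for reuse */
static int   num_recycled = 0;

void *chunk_alloc(void)
{
    if (num_recycled > 0)
        return recycled[--num_recycled];  /* reuse a cached chunk; no malloc */
    return malloc(CHUNK_SIZE);
}

void chunk_free(void *p)
{
    if (num_recycled < NUM_CHUNKS)
        recycled[num_recycled++] = p;     /* cache it instead of freeing it */
    else
        free(p);                          /* cache full: really free */
}

As the measurements showed, jemalloc already services a fixed-size malloc/free pattern like this quickly, which is why removing the cache didn’t hurt performance.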

I also removed the one other use of nsRecyclingAllocator in libjar, for which this recycling behaviour also had no noticeable effect.  This meant I could then remove nsRecyclingAllocator itself.  Taras Glek summarized things nicely:  “I’m happy to see placebos go away”.

Other Stuff

I gave a talk at the linux.conf.au Browser MiniConf entitled “Notes on Reducing Firefox’s Memory Consumption”, which got some attention on Slashdot.

Henrik Skupin reported that Ghostery 2.6.2 and 2.7beta2 have memory leaks (zombie compartments).  I contacted the authors today and they said they are looking into the problem, so hopefully we’ll see a fix soon.

Gian-Carlo Pascutto reworked the code that downloads the safe browsing database to use less memory, and to have a fallback if memory runs out.  This memory usage is transient and so the main benefit is that it prevents some out-of-memory crashes that were happening frequently.

Last week I mentioned bug 703427, which held out the possibility of a simple, big reduction in SQLite memory usage.  Marco Bonardo did some analysis, and unfortunately the patch caused large increases in the number of disk accesses, and so it won’t work.  A shame.

Kyle Huey fixed a zombie compartment that occurred when right-clicking on a single-line textbox.  The fun thing about this was that in only 3 hours and 35 minutes, the following events happened: the bug report was filed, the problem was confirmed by two people, the bug report was re-categorized into the appropriate component, a patch was posted, the patch was reviewed, the patch landed on mozilla-central, and the bug report was marked as fixed!  And approval for back-porting to Aurora was granted 8 hours later.  Not bad.  Kyle has also made progress on a more frequent zombie compartment caused by searching for text.

Jonathan Kew made the shaped-word caches (which are involved in text rendering) discard their data on memory pressure events.

Quote of the week

A commenter on my blog named jas said (the full comment is here):

a year ago, FF’s memory usage was about 10x what chrome was using in respect to the sites we were running…

so we have switched to chrome…

i just tested FF 9.0.1 against chrome, and it actually is running better than chrome in the memory department, which is good. but, it’s not good enough to make us switch back (running maybe 20% better in terms of memory). a tenfold difference would warrant a switch. in our instance, it was too little, too late.

but glad you are making improvements.

So that’s good, I guess?

I also like this comment from the aforementioned Slashdot thread!

Bug Counts

Here are the current bug counts.

  • P1: 24 (-3/+1)
  • P2: 131 (-8/+7)
  • P3: 69 (-1/+3)
  • Unprioritized: 3 (-3/+2)

That’s a net drop of two, largely because I went through and closed some P1 and P2 bugs that were stale or had been fixed elsewhere.

Notes on Reducing Firefox’s Memory Consumption

I gave a talk yesterday at the linux.conf.au Browser MiniConf, held in Ballarat, Australia.  Its title was “Notes On Reducing Firefox’s Memory Consumption”.

Below are the slides and notes in a SlideShare embedding.  If you find that embedding problematic (some people do), you may prefer to download the PDF version directly.

MemShrink progress, week 9

Firefox 8 graduated to the Aurora channel this week, and the development period for what will become Firefox 9 began.  Lots of MemShrink activity happened this week, and I think all the changes listed below will make it into Firefox 8.

Avoiding Wasted Memory

I have blogged previously about memory wasted by “clownshoes” bugs.  Ed Morley found a webpage that resulted in 700MB of memory being wasted by the PLArena clownshoes bug.  Basically, on platforms where jemalloc is used (Windows, Linux), half the memory allocated by nsPresArena (which is built on top of PLArena) was wasted.  (On Mac the waste was 11%, because the Mac allocator rounds up less aggressively than jemalloc.)

Fixing this problem properly for all PLArenas takes time because it requires changes to NSPR, so I made a spot-fix for the nsPresArena case.  This is a particularly big win on very large pages, but it saves around 3MB even on Gmail. This spot-fix has been granted beta approval and so will, barring disaster, make it into Firefox 7.

A Firefox Nightly user did some measurements with different browsers on the problematic page:

  • Firefox 8.0a1 before patch: 2.0 GB
  • Firefox 8.0a1 after patch: 1.3 GB
  • Latest Chrome canary build and dev (15.0.849.0): 1.1 GB
  • Webkit2Process of Safari 5.1: 1.05 GB
  • Internet Explorer 9.0.2: 838 MB
  • Latest Opera Next 12.00: 727 MB

So this fix gets Firefox within spitting distance of other browsers, which is good!

In other developments related to avoiding wasted memory:

  • Luke Wagner discovered that, on typical websites, most JSScripts are byte-compiled but never run.  A JSScript roughly corresponds to a JavaScript function.  In hindsight, it’s not such a surprising result — Firefox byte-compiles all loaded JavaScript code, and you can imagine lots of websites use libraries like jQuery but only use a small fraction of the functions in the library.  Making byte-compilation lazy could potentially save MBs of memory per compartment (the sketch after this list shows the basic idea).  But that will require some non-trivial reworking of the JS engine, and so is unlikely to happen in the short-term.
  • Kyle Huey avoided a small amount (~100KB per browser process) of waste due to rounding up in XPT arenas.
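
To make the lazy byte-compilation idea in the first item concrete, here is a conceptual sketch.  It is not SpiderMonkey code; Bytecode, compile() and LazyScript are hypothetical names:

typedef struct Bytecode Bytecode;        /* opaque; stands in for real bytecode */
Bytecode *compile(const char *source);   /* hypothetical byte-compiler */

typedef struct LazyScript {
    const char *source;  /* cheap: just the function’s source text */
    Bytecode   *code;    /* NULL until the function is first called */
} LazyScript;

Bytecode *get_code(LazyScript *script)
{
    if (!script->code)
        script->code = compile(script->source);  /* pay the memory cost only if it ever runs */
    return script->code;
}

Functions that are never called never get past the NULL pointer, so their bytecode is never materialized.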

Improving about:memory

I made some progress on a Valgrind tool to help identify the memory that is currently reported only by the “heap-unclassified” bucket in about:memory.  It’s called “DMD”, short for “Dark Matter Detector”.  It’s in early stages and I still need to teach it about most of Firefox’s memory reporters, but it’s already spitting out useful data, which led to me and Ehsan Akhgari landing memory reporters for the JS atom table and the Hunspell spell checker.  We also now have some insight (here and here) about memory usage for very large pages.

Mounir Lamouri turned on the memory reporter for the DOM that he’s been working on for some time.  This shows up under “dom” in about:memory.  There are still some cases that require handling;  you can follow the progress of these here.

Andrew McCreight replaced about:memory’s buttons so you can force a cycle collection without also forcing a garbage collection, which may be useful in hunting down certain problems.

Finally, Sander van Veen added the existing “js-compartments-user” and “js-compartments-system” to the statistics collected by telemetry (his first landed patch!), and I did likewise for the “storage/sqlite” reporter.  I also added a new “tjit-data/trace-monitor” memory reporter that accounts for some of the memory used by TraceMonkey.

Miscellaneous

Igor Bukanov tweaked the handling of empty chunks by the JavaScript garbage collector.  That sounds boring until you see the results on Gregor Wagner’s 150-tab stress test: resident memory usage dropped 9.5% with all 150 tabs open, and dropped by 27% after all those tabs were closed.

Brian Hackett fixed a memory leak in type inference, which gets it one step closer to being ready to land.

Christian Höltje fixed a leak in his “It’s All Text” add-on that was causing zombie compartments.  This fix will be in version 1.6.0, which is currently awaiting AMO approval, but can be obtained here in the meantime.  This fix and last week’s fix of a memory leak in LastPass are very encouraging — per-compartment reporters in about:memory have, for the first time, given add-on developers a reasonable tool for identifying memory leaks.  I hope we can continue to improve the situation here.  Several people have asked me for documentation on how to avoid memory leaks in add-ons.  I’m not the person to write that guide (I’m not a Gecko expert and I know almost nothing about add-ons) but hopefully someone else can step up to the plate.

Bug counts

Here’s the change in MemShrink bug counts.

  • P1: 30 (-0, +1)
  • P2: 64 (-4, +6)
  • P3: 36 (-5, +0)
  • Unprioritized: 1 (-2, +1)

Good progress on P3 bugs, but they’re the least important ones.  Other than that, new bugs are still being reported faster than they’re being fixed.  If you want to help but don’t know where to start, feel free to email me or ping me on IRC and I’ll do my best to help get you involved.


Clownshoes available in sizes 2^10+1 and up!

I’ve been working a lot on about:memory lately.  It’s a really useful tool, but one frustrating thing about it is that a decent chunk of our memory usage is not covered by any of the existing memory reporters, and so is falling into the “heap-unclassified” bucket.

This bucket typically accounts for 30–45% of about:memory’s “explicit” measurement.  We’ve discussed this “dark matter” several times in MemShrink meetings.  There’s lots of work underway to add new reporters to uninstrumented parts of Gecko, but I’ve seen cases where it looks like a decent chunk of the missing memory is due to the JS engine, which is already seemingly thoroughly instrumented.

This week I realized that some of the dark matter could be due to “slop” from jemalloc, the browser’s heap allocator.  What is slop?  When you ask a heap allocator for a certain number of bytes, it’ll often give you back more than you asked for, rounding the size up.  This wastes some memory, but there are good reasons for it — it makes the heap allocator much faster and simpler, and helps avoid fragmentation when the memory is freed.
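
You can observe slop directly with malloc_usable_size(), which returns the size of the block the allocator actually gave you.  A small example — malloc_usable_size is available with jemalloc and glibc via <malloc.h> (on Mac the analogue is malloc_size from <malloc/malloc.h>), and the numbers in the comment assume jemalloc’s size classes, shown below:

#include <malloc.h>   /* malloc_usable_size (glibc and jemalloc) */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t requests[] = { 24, 680, 1024, 1025 };
    for (size_t i = 0; i < sizeof(requests) / sizeof(requests[0]); i++) {
        void *p = malloc(requests[i]);
        size_t actual = malloc_usable_size(p);
        /* With jemalloc’s size classes: 24 -> 32, 680 -> 1024,
           1024 -> 1024 (no slop), 1025 -> 2048. */
        printf("requested %4zu, got %4zu, slop %4zu\n",
               requests[i], actual, actual - requests[i]);
        free(p);
    }
    return 0;
}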

The following comment from jemalloc.c shows jemalloc’s size classes.  Any request that’s not for one of the listed sizes is rounded up to the next size in the list.

*   |=====================================|
*   | Category | Subcategory    |    Size |
*   |=====================================|
*   | Small    | Tiny           |       2 |
*   |          |                |       4 |
*   |          |                |       8 |
*   |          |----------------+---------|
*   |          | Quantum-spaced |      16 |
*   |          |                |      32 |
*   |          |                |      48 |
*   |          |                |     ... |
*   |          |                |     480 |
*   |          |                |     496 |
*   |          |                |     512 |
*   |          |----------------+---------|
*   |          | Sub-page       |    1 kB |
*   |          |                |    2 kB |
*   |=====================================|
*   | Large                     |    4 kB |
*   |                           |    8 kB |
*   |                           |   12 kB |
*   |                           |     ... |
*   |                           | 1012 kB |
*   |                           | 1016 kB |
*   |                           | 1020 kB |
*   |=====================================|
*   | Huge                      |    1 MB |
*   |                           |    2 MB |
*   |                           |    3 MB |
*   |                           |     ... |
*   |=====================================|

In extreme cases, jemalloc will return almost double what you asked for.  For example, if you ask for 1,025 bytes, it’ll give you 2,048.  A lot of the time you have to just live with slop;  if you need to heap-allocate an object that’s 680 bytes, jemalloc will give you 1,024 bytes, and you just have to accept the 344 bytes of waste.  But if you have some flexibility in your request size, it’s a good idea to pick a size that’s a power-of-two, because that always gives you zero slop.  (Remember this, it’s important later on.)
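
If you do have that flexibility, rounding a request up to the next power-of-two takes only a few lines of bit-twiddling.  A sketch, assuming a 64-bit size_t (this is an illustration, not code from Firefox):

#include <stddef.h>

/* Round n up to the next power of two; n must be at least 1. */
size_t round_up_pow2(size_t n)
{
    n--;            /* so exact powers of two map to themselves */
    n |= n >> 1;    /* smear the highest set bit downwards... */
    n |= n >> 2;
    n |= n >> 4;
    n |= n >> 8;
    n |= n >> 16;
    n |= n >> 32;   /* (assumes a 64-bit size_t) */
    return n + 1;   /* ...then step up to the next power of two */
}

For example, round_up_pow2(680) is 1,024, so the 344 bytes that would have been slop become usable capacity instead.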

So I instrumented jemalloc to print out an entry for every heap allocation, showing the requested amount, the actual allocated amount, and the resulting slop.  I then started up Firefox, opened Gmail, then shut down Firefox.  Next, I ran the resulting log through a wonderful little concordance-like script I have called “counts”, which analyzes a file and tells you how many times each distinct line occurs.  Here are the top ten lines of the output:

909062 numbers:
( 1 ) 205920 (22.7%, 22.7%): small:     24 ->     32 (     8 )
( 2 )  66162 ( 7.3%, 29.9%): small:     72 ->     80 (     8 )
( 3 )  61772 ( 6.8%, 36.7%): small:     40 ->     48 (     8 )
( 4 )  54386 ( 6.0%, 42.7%): small:   1056 ->   2048 (   992 )
( 5 )  48501 ( 5.3%, 48.0%): small:     18 ->     32 (    14 )
( 6 )  47668 ( 5.2%, 53.3%): small:     15 ->     16 (     1 )
( 7 )  24938 ( 2.7%, 56.0%): large:   4095 ->   4096 (     1 )
( 8 )  24278 ( 2.7%, 58.7%): small:     56 ->     64 (     8 )
( 9 )  13064 ( 1.4%, 60.1%): small:    104 ->    112 (     8 )
(10 )  12852 ( 1.4%, 61.6%): small:    136 ->    144 (     8 )

There were 909,062 lines, which means there were 909,062 allocations.  (I didn’t print information about frees.)  The most common line was “small: 24 -> 32 ( 8 )”, which occurred 205,920 times, accounting for 22.7% of all the lines in the file.  The “small” refers to jemalloc’s size classes (see above), and every allocation request of 24 bytes was rounded up to 32 bytes, resulting in 8 bytes of slop.

Looking through the list, I saw a case where 1,048,578 (2^20+2) bytes had been requested, and jemalloc had returned 2,097,152 (2^21) bytes.  That’s a huge (1MB) waste, and also smelled very fishy.  And on further inspection there were a lot of smaller but still suspicious cases where a number slightly larger than a power-of-two was being rounded up to the next power-of-two:  1,032 to 2,048;  1,056 to 2,048;  2,087 to 4,096;  4,135 to 8,192;  etc.
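
Sizes like these are easy to flag mechanically.  The following is a sketch of the kind of check I mean; the 64-byte window and the 1KB floor are arbitrary choices for illustration, not the thresholds I actually used:

#include <stddef.h>

#define WINDOW 64   /* how far past a power of two still counts as "slightly" */

/* Flag requests that are just over a power of two, like 1,032 or 4,135. */
int is_suspicious(size_t req)
{
    if (req <= 1024)
        return 0;          /* only the larger size classes double the request */
    size_t p = 1024;
    while (p < req)
        p <<= 1;           /* p = smallest power of two >= req */
    if (p == req)
        return 0;          /* exactly a power of two: zero slop */
    return req - p / 2 <= WINDOW;   /* just over the previous power of two? */
}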

I investigated a number of these by adding some code to jemalloc to detect these suspicious request sizes, and then using Valgrind to print out a stack trace every time one occurred.  What I found was four distinct places in the codebase where the code in question had flexibility in the amount of memory it allocated, and so had tried to ask for a power-of-two, but had botched the request and thus ended up asking for slightly more than a power-of-two!

  • In js/src/vm/String.cpp, when concatenating two strings, the capacity of the resultant string is rounded up to the next power-of-two (unless the string is really big, in which case the rounding is less aggressive).  This is a reasonable thing to do, as it means that if a further concatenation occurs, there’s a good chance there’ll be space for the extra chars to be added in-line, avoiding an extra allocation and copy.  But the code first did its power-of-two round-up, and then added sizeof(jschar) (which is two bytes) to the size, to allow space for the terminating NULL char.  This code was responsible for the 1,048,578 byte request I mentioned earlier.  Luke Wagner fixed this bug with a minimal change earlier this week.  I was unable to easily measure the overall cost of this, but avoiding 1MB of slop with some frequency can only be a good thing.
  • nsprpub/lib/ds/plarena.c implements an arena allocator.  Each arena pool is given a size for each arena when it’s created, and the size passed in is almost always a power-of-two such as 1,024 or 4,096.  Which is good, except that the allocator then has to add a few more bytes onto the request because each arena has a header that holds book-keeping information.  When I start Firefox and load Gmail on my Linux64 box, I see that approximately 3MB of space is wasted because of this bug.  The fix is simple: it just requires that the arena payload be reduced by the size of the header (see the sketch after this list);  my patch is awaiting review.  Sadly enough, this problem was identified 3.5 years ago but not fixed.
  • js/src/jsarena.cpp is almost identical to nsprpub/lib/ds/plarena.c and the story is the same:  it has the same problem; it was first identified 3.5 years ago but not fixed; and my patch is awaiting review.  This didn’t make much difference in practice because this problem had been separately identified for the particular JSArenaPool that allocates the most memory, and worked around using an ill-documented hack.
  • db/sqlite3/src/sqlite3.c is the best one of all.  SQLite is very careful about measuring its memory usage and provides an API to access those measurements.  But in order to measure its own memory usage accurately, it adds 8 bytes to every allocation request in order to store the requested size at the start of the resulting heap block.  In doing so, it converts a lot of 1,024 byte requests into 1,032 byte requests.  Ironically enough, the slop caused by storing the requested size rendered the stored size inaccurate.  I determined that as a result, SQLite is using roughly 1.15–1.20x more memory in Firefox than it thinks it is.  So, go look at the “storage/sqlite” number in about:memory and mentally adjust it accordingly.  (On my Mac laptop, which has an old profile and 11 tabs open, it’s currently 70MB, so there’s probably an additional 10–14MB of slop.)  SQLite is an upstream product, and the authors are working on a fix.
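
The plarena.c and jsarena.cpp bugs (and the String.cpp one) all have the same shape: round up to a power-of-two first, then add a little extra on top.  Here is a simplified sketch of the broken pattern and the fix, with a made-up ARENA_HEADER_SIZE standing in for the real book-keeping header:

#include <stdlib.h>

#define ARENA_HEADER_SIZE 24   /* stand-in for the real per-arena header */

/* Buggy pattern: the caller picks a power-of-two arena size, but the header
   is added on top of it, so the actual request overshoots.  With a 4,096-byte
   arena, jemalloc rounds the 4,120-byte request up to 8,192, so nearly half
   of every arena is slop. */
void *arena_alloc_buggy(size_t arena_size)
{
    return malloc(ARENA_HEADER_SIZE + arena_size);
}

/* Fixed pattern: keep the total request at the power of two the caller chose,
   and shrink the usable payload by the header size instead. */
void *arena_alloc_fixed(size_t arena_size)
{
    return malloc(arena_size);   /* payload = arena_size - ARENA_HEADER_SIZE */
}

This is the same arithmetic behind the nsPresArena waste described in the week 9 post above, where half the memory was wasted on jemalloc platforms.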

Once all these fixes have landed, there will be two benefits.  First, Firefox’s memory usage will go down, which is good for everyone.  Second, the proportion of dark matter in about:memory will also drop, which makes our memory profiling more accurate and useful.

After that, converting existing reporters to use malloc_usable_size will account for much of the legitimate slop, further reducing the amount of dark matter. And then I have a master plan for identifying whatever’s left.


So what’s up with this post’s title?  Andreas Gal uses the term “clownshoes” to refer to code that is laughably bad.  When I found and filed the first of these bugs, in a fit of whimsy I put “[clownshoes]” in its whiteboard.  Only later did a realize how suitable this was: a clown’s shoes and these botched power-of-two-sized allocations both have lots of empty space in them, and thus make their owner less nimble than they could be.  Perfect.