
MemShrink progress, week 113–116

It’s been a relatively quiet four weeks for MemShrink, with 17 bugs fixed.  (Relatedly, in today’s MemShrink meeting we only had to triage 10 bugs, which is the lowest we’ve had for ages.)  Among the fixed bugs were many relating to B2G leaks and leak-like problems; these are often hard to explain, but are important for the phone’s stability.

Fabrice Desré made a couple of notable B2G non-leak fixes.

On desktop, Firefox users who view about:memory may notice that it now sometimes mentions more than one process.  This is due to the thumbnails child process, which generates the thumbnails seen on the new tab page, and which occasionally is spawned and runs briefly in the background.  about:memory copes with this child process ok, but the mechanism it uses is sub-optimal, and I’m planning to rewrite it to be nicer and scale better in the presence of multiple child processes, because that’s a direction we’re heading in.

Finally, some sad news:  Justin Lebar, whose name should be familiar to any regular reader of these MemShrink reports, has left Mozilla.  Justin was a core MemShrink-er from the very beginning, and contributed greatly to the success of the project.  Thanks, Justin, and best of luck in the future!

14 replies on “MemShrink progress, week 113–116”

AWSY is a useful tool, but it’s a single measurement of a single benchmark on a single machine. Don’t read too much into it. Our memory situation is vastly better than it was in the Fx4 era.

But even if AWSY is not an entirely reliable method, I do see a permanent increase in memory usage across releases. The current Nightly consistently uses 700 MB, with Gmail, ABP and some 15-ish tabs.
And it gets worse over time (development time, not runtime).

So is it just creeping featureism or some other reason?

Still being worked on. Nothing new to report. When there is something to report, I will report it.

If you’re interested in non-MemShrink news about it: in AWFY tests the GGC version of the JavaScript engine is now performing at a similar overall speed level to the old non-GGC version. There are still large variations between individual tests within the benchmarks, but some of that is probably inevitable. The increased work of shuffling objects between generations will end up hurting some tests, while others will benefit from having their entire working set in the youngest generation, because this keeps the objects closer together in memory, allowing for better cache utilization. That said, some of the slow tests have individually gotten better, which I assume reflects tweaking of the generational behavior to avoid pathological cases.

I think there should be an AWSY tab for the stable channel, to see (and show) how memory management changes from an end user’s perspective over time.

Hi Nic,
Do you know when us nightly users will be able to enable GGC? Or do you have an update on the progress of it?

FWIW, Komodo has recently faked up a child memory reporter (we don’t really do interesting plugins, so plugin-container doesn’t exist). Seems to work kinda okay; the JS code is definitely all weird though. That, and for some strange reason multi memory reporters still expect all calls to be synchronous, which means trying to do IPC can be risky… 🙂

(Pretty sure what I actually have is rather inaccurate, but it’s still miles better than completely missing, I think.)
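The tension the comment above describes — a reporting interface that expects all calls to complete synchronously, colliding with inherently asynchronous IPC — can be sketched generically. This is a hypothetical illustration, not the real Gecko memory-reporter API; all names here are made up for the example.

```javascript
// Hypothetical sketch: a collector that requires reporters to invoke
// their callback synchronously, before collect() returns.
function collectReportsSync(reporters) {
  const reports = [];
  for (const r of reporters) {
    // The contract assumes cb fires before this call returns.
    r.collect(report => reports.push(report));
  }
  return reports;
}

// An in-process reporter can honor that contract...
const localReporter = {
  collect(cb) { cb({ path: "heap/local", amount: 1024 }); },
};

// ...but a reporter backed by IPC cannot: the child process's reply
// arrives on a later event-loop turn, after collectReportsSync has
// already returned. setTimeout stands in for the async round-trip.
const ipcReporter = {
  collect(cb) {
    setTimeout(() => cb({ path: "heap/child", amount: 2048 }), 0);
  },
};

const reports = collectReportsSync([localReporter, ipcReporter]);
// Only the local report made it in; the child's report was dropped.
console.log(reports.length); // 1
```

This is why faking a child reporter over IPC is "risky" under a synchronous contract: the collector either misses the child's data or has to block, and blocking on IPC from the main process invites deadlocks.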

Is there a technical reason the thumbnail generation was made into a separate process rather than a thread? I always thought Firefox was trying to move to a multi-threaded architecture, but now it seems like they’re aiming at multi-process scheme more similar to Chrome instead?

The thumbnails generation apparently causes less jank (i.e. stuttering) when it is in a different process.

More generally, Firefox is already heavily multi-threaded. And we’re gradually moving towards a multi-process model. This was attempted a couple of years ago in the “Electrolysis” project, which failed, mostly due to the difficulty of keeping add-ons working. But that project has been resurrected and is currently being worked on. No-one is sure yet what the end result will look like, but I do expect more processes.

Is there any publicly available information about where it’s going? Other than Brendan Eich’s “maybe by the end of the year” pronouncement back in January, I haven’t heard anything about it.

I haven’t heard anything about its status. I expect we’ll hear when significant milestones are reached. It’s a big, difficult project, so it could be a while.
