MemShrink:P1 Bugs Fixed
Terrence Cole made a change that allows unused arenas in the JS heap to be decommitted, which means they more or less don’t cost anything. This helps reduce the cost of JS heap fragmentation, which is a good short-term step while we are waiting for a compacting garbage collector to be implemented. Terrence followed it up by making the JS garbage collector do more aggressive collections when many decommitted arenas are present.
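"Decommitting" an arena means returning its physical pages to the OS while keeping the virtual address range reserved, so an empty arena stops costing real memory. Here is a minimal, Linux-only Python sketch of the idea; the JS engine does this in C++ (and Windows uses VirtualFree with MEM_DECOMMIT), and the 1 MiB size below is purely illustrative, not SpiderMonkey's actual arena size:

```python
import mmap

REGION = 1 << 20  # illustrative region size; not the real GC arena size

# Map a private anonymous region and touch it, as if it held GC objects.
buf = mmap.mmap(-1, REGION, flags=mmap.MAP_PRIVATE)
buf.write(b"\xab" * REGION)  # touching the pages commits physical memory

# "Decommit": MADV_DONTNEED tells the kernel the contents are disposable.
# The address range stays mapped and reusable, but the physical pages can
# be reclaimed immediately, so the empty region costs almost nothing.
buf.madvise(mmap.MADV_DONTNEED)

buf.seek(0)
print(buf.read(4))  # on Linux the range reads back as fresh zero pages
```

The key property is that the mapping survives, so the allocator can recommit and reuse the range later simply by touching it again.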
Justin Lebar enabled jemalloc on Mac OS X 10.7. This means that jemalloc is finally used on all supported versions of our major OSes: Windows, Mac, Linux and Android. Having a common heap allocator across these platforms is great for consistency of testing and behaviour, and makes future improvements involving jemalloc easier.
Other Bugs Fixed
I registered jemalloc with SQLite’s pluggable allocator interface. This had two benefits. First, it means that SQLite no longer needs to store the size of each allocation next to the allocation itself, avoiding some clownshoes allocations that wasted space. This reduces SQLite’s total memory usage by a few percent. Second, it makes the SQLite numbers in about:memory 100% accurate; previously SQLite was under-reporting its memory usage, sometimes significantly.
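For background: SQLite's pluggable allocator interface (sqlite3_config with SQLITE_CONFIG_MALLOC and a sqlite3_mem_methods table) includes an xSize callback, so an allocator like jemalloc that can answer size queries directly lets SQLite drop its own size header. Here is a toy model of why that header was costly; the power-of-two rounding is a deliberate simplification of jemalloc's real, finer-grained size classes:

```python
HEADER = 8  # bytes of bookkeeping prepended to each allocation (illustrative)

def round_up_pow2(n):
    """Model an allocator that serves requests from power-of-two size
    classes. Real jemalloc size classes are finer-grained than this."""
    size = 1
    while size < n:
        size *= 2
    return size

request = 1024                                 # a nicely sized request...
with_header = round_up_pow2(request + HEADER)  # ...plus a header: rounds to 2048
without_header = round_up_pow2(request)        # no header: exact 1024-byte fit

print(with_header, without_header)  # 2048 1024: the header nearly doubled the cost
```

A request that would fit a size class exactly gets pushed into the next class by the header, which is exactly the "clownshoes" waste described above; removing the header also lets about:memory ask the allocator for true sizes instead of trusting SQLite's own accounting.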
Peter Van der Beken fixed a cycle collector leak.
Jiten increased the amount of stuff that is released on memory pressure events, which are triggered when Firefox on Android moves to the background.
Finally, I created a meta-bug for tracking add-ons that are known to have memory leaks.
I accidentally deleted my record of the live bugs from last week, so I don’t have the +/- numbers for each priority this week.
- P1: 29 (last week: 35)
- P2: 126 (last week: 116)
- P3: 59 (last week: 55)
- Unprioritized: 0 (last week: 5)
The P1 result was great this week: six fewer than last week. Three were fixed, and three I downgraded to P2 because they’d been partially addressed.
For a longer view of things, here is a graph showing the MemShrink bug count since the project started in early June.
There was an early spike as many existing bugs were tagged with “MemShrink”, and a smaller spike in the middle when Marco Castelluccio tagged a big pile of older bugs. Other than that, the count has marched steadily upward at the rate of about six per week. Many bugs are being fixed and definite improvements are being made, but this upward trend has been concerning me.
So in today’s MemShrink meeting we spent some time discussing future directions of MemShrink. Should we continue as is? Should we change our focus, e.g. by concentrating more on mobile, or setting some specific targets?
The discussion was long and wide-ranging and not easy to summarize. One topic was “what is the purpose of MemShrink?” The point is that memory usage is really a secondary measure: by and large, people don’t care how many MB of memory Firefox is using; they care how responsive it is, and it’s just assumed that reducing memory usage will help with that. With that in mind, I’ll attempt to paraphrase and extrapolate some goals (apologies if I’ve misrepresented anyone’s opinions).
- On 64-bit desktop, the primary goal is that Firefox’s performance should not degrade after using it heavily (e.g. many tabs) for a long time. This means it shouldn’t page excessively, and that operations like garbage collection and cycle collection shouldn’t get slower and slower.
- On mobile, the primary goal is probably to reduce actual memory usage. Usage on mobile tends to be lighter (e.g. not many tabs), so the longer-term issues matter less; however, Firefox will typically be killed by the OS if it takes up too much memory.
- On 32-bit desktop, both goals are relevant.
As for how these goals would change our process, it’s not entirely clear. For desktop, it would be great to have a benchmark that simulates a lot of browsing (opening and closing many sites and interacting with them in non-trivial ways). At the end we could measure various things, such as memory usage and garbage and cycle collection times, and set targets to reduce them. For mobile, the current MemShrink process probably doesn’t need to change much, though more profiling on mobile devices would be good.
Personally, I’ve been spreading myself thinly over a lot of MemShrink bugs. In particular, I try to push them along and not let them stall by doing things like trying to reproduce them, asking questions, etc. I’ve been feeling lately like it would be a better use of my time to do less of that and instead dig deeply into a particular area. I thought about working on making JS script compilation lazy, but now I’ve decided instead to focus primarily on improving the measurements in about:memory, in particular, reducing the size of “heap-unclassified” by improving existing memory reporters and adding new ones. I’ve decided this because it’s an area where I have expertise, clear ideas on how to make progress, and tools to help me. Plus it’s important; we can’t make improvements without measurements, and about:memory is the best memory measurement tool we have. Hopefully other people agree that this is important to work on 🙂