
Bleg for a new machine (part 2)

Last week I blegged for help in designing a new machine, and I got almost 50 extremely helpful comments and a handful of private emails.  Many thanks to all those who gave advice.

I mentioned that I want browser and JS shell builds to be fast, and that I want the machine to be quiet.  There were two other things that I didn’t mention, that affect my choices.

  • I’m not a hardware tinkerer type.  I don’t particularly enjoy setting up machines — I’m a programmer, not a sysadmin :)  I like vanilla configurations, so that problems are unlikely, and so that when they do occur there’s a good chance someone else has already had the same problem and found a solution.  So that’s a significant factor in my design.
  • I turn off my machine at night. And I use lots of repository clones (I have 10 copies of inbound present at all times), typically switching between two or three of them in one session.  So I stress the disk cache in ways that other people might not.

Here’s my latest configuration.  I don’t expect anything other than perhaps minor changes to this, though I’d still love to hear your thoughts.

  • CPU.  The Intel i7-4770.  I originally chose the i7-4770K, which is 0.1 GHz faster and is overclockable, but it lacks some newer CPU features, such as VT-d virtualization and TSX transactional memory.  Since I won’t overclock — as I said, I’m not the tinkerer type — several people suggested the i7-4770 would be better.
  • Motherboard. ASUS Z87-Plus.  I originally chose the ASUS Z87-C, but was advised that a board with an Intel NIC would be better.
  • Memory. 32 GiB of Kingston 2133 MHz RAM.  No change.
  • Disk. Samsung 840 Pro Series 512 GB.  No change. Multiple people said this was overkill — that 256 GB should be enough, or that the cheaper 840 EVO was almost as good.  But I’ll stick with it;  those disks have a really good reputation, the drive should last a long time, and I really like the idea of not having to worry about disk space, especially with two OSes installed. And apparently the performance of those drives diminishes once they get about 80% full, so having some excess capacity sounds good.
  • Graphics card.  Multiple people agreed that the Intel integrated graphics was powerful enough, and that the Intel driver situation on Linux is excellent, which is great — I don’t like mucking about with drivers!
  • Case. The Fractal Design Define R4 (Black) was recommended by two people.  It looks fantastic (my wife is in love with it already) and is reputedly very quiet.
  • Optical drive.  A Samsung DVD-RW drive. Unchanged.
  • Software. Several people suggested using VirtualBox instead of VMware for my Windows VM.  I didn’t know about VirtualBox, so that was a good tip.  Someone also suggested I get Windows 7 Professional instead of Home Premium, because the latter only supports 16 GiB of RAM.  Ugh, typical Microsoft market segmentation.
  • I didn’t mention monitor, keyboard and mouse because I’m happy with my current ones.

This looks like an excellent set-up for a single-CPU, quad-core machine.  However, multiple people suggested that I go for more cores, either by choosing a 6-core or 8-core server CPU, by using dual sockets, or both.  I spent a lot of time investigating this option, and I considered several configurations, including a dual-socket machine with two Xeon E5-2630 CPUs (giving 12 cores and 24 threads), a single-socket machine with an i7-3970X (giving 6 cores and 12 threads), and one with a Xeon E5-2660 (giving 8 cores and 16 threads).  But I have a mélange of concerns: (a) a more complex configuration (esp. dual-socket), (b) lack of integrated graphics, (c) higher power consumption, heat and noise, and (d) probably worse single-threaded performance.  These were enough that I have put the idea into the too-hard basket for now.

Ideally, I’d love to build two or three machines, benchmark them, and give all but one back.  Or it would be nice if Intel’s rumoured 8-core Haswell-E consumer CPUs were available now.

Still, daydreams aside, compared to my current machine, the above machine should give a nice speed bump (maybe 15–20% for CPU-bound operations, and who-knows-how-much for disk-bound operations), should be quieter, and will allow me to do Windows builds much more easily.

Thanks again to everyone who gave such good advice!  I promise that once I purchase and set up the new machine, I’ll blog about its performance compared to the old machine, so that any other Mozilla developers who want to get a new machine have some data points.

Bleg for a new machine

I last upgraded my desktop machine in June 2011.  This improved Firefox clobber build times from ~25 minutes to ~12 minutes, which was a huge productivity win.  Since then, build times have crept back up to, yep, ~25 minutes.  Time for a new machine.

I’m no hardware guru, so I’m asking for help.  I’m a Linux desktop user, and my main goal is to make compilation of Firefox fast.  I frequently build both Firefox and the JS shell, and it’s not unusual for me to have two (and occasionally three) builds running concurrently.  I’d also like for the machine to be quiet, as it will sit on top of my desk, not far from where I sit.

I have a quote for a machine from a company called CPL, who I’ve bought from before.  (Note that all prices on their website are in AUD.)  I gave them partial guidance on components (based on some very helpful advice from Naveed Ihsanullah), and here’s what they came up with.

  • CPU.  The Intel i7-4770K.  It is a 3.5 GHz CPU with 4 physical cores, 8 virtual cores (via hyper-threading), and an 8 MiB cache.  This is the fastest of the new Haswell CPUs that CPL has and so seems like a good choice.
  • Motherboard.  ASUS Z87-C.  Naveed said ASUS is a good brand.
  • Memory. 32 GiB of Kingston 2133 MHz RAM.  I currently have 16 GiB, and while I don’t feel like that limits me at the moment, I might as well give myself some breathing room.
  • Disk. Samsung 840 Pro Series 512 GB.  Naveed recommended this specific model.  I currently have a magnetic disk, so I’m hoping to see some big performance improvements here (both while compiling and when doing things like grepping through the whole tree).  It’s interesting that this is easily the most expensive component;  SSDs still ain’t cheap.
  • Graphics card.  I currently do very little graphics-intensive stuff, other than look at the occasional WebGL demo.  Apparently the i7-4770K has reasonably fast integrated graphics, so I’m on the fence about whether I should get a separate graphics card.  If so, I guess it would be good to know if the driver situation on Linux is a happy one.
  • Case.  The Zalman MS800 Plus ATX Mid Tower.  This was entirely CPL’s suggestion, and I don’t much like the look of it.  It’s tall — about 10 cm taller than my current case — and it also has an air-vent on top, whereas my current case doesn’t and so I’m able to put a printer on top of it.  On the plus side, it is pretty plain looking — no glowing lights or racing stripes — which I appreciate.
  • Optical drive.  A bog-standard Samsung DVD-RW drive.  I think I have the exact same one in my current machine, and it works fine.  I pretty much only use it to install OSes.
  • Software.  I use Ubuntu, and I want to install Windows 7 in a VM for the occasional times when I have some Windows-specific behaviour that I need to investigate.  (My current machine dual boots between Linux and Windows, and having to reboot is an enormous hassle.)  CPL has Windows 7 Home Premium 64-bit — I’m pretty sure I don’t want to deal with Windows 8 — but they don’t sell VMWare so I’ll need to get that somewhere else.

Overall, I feel pretty good about everything except the graphics card and the case.  Please let me know if you have any suggestions!  Thanks.

Using include-what-you-use

include-what-you-use (a.k.a. IWYU) is a clang tool that tells you which #include statements should be added and removed from a file.  Nicholas Cameron used it to speed up the building of gfx/layers by 12.5%.  I’ve also used it quite a bit within SpiderMonkey;  I’ve seen smaller build speed improvements but I’ve also been doing it in chunks over time.  Ms2ger started a tracking bug for all places where IWYU has been used in Mozilla code.

Ehsan asked for instructions on setting up IWYU.  There are official instructions, but I thought it might be helpful to document exactly what I did.

First, here is how I installed clang, based on Ehsan’s instructions.  I put the source code under $HOME/local/src and installed the build under $HOME/local.

  mkdir -p $HOME/local/src   # -p also creates $HOME/local if needed
  cd $HOME/local/src
  svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm
  cd llvm/tools
  svn co http://llvm.org/svn/llvm-project/cfe/trunk clang
  cd ..   # back to the llvm/ directory, so the build ends up in llvm/build
  mkdir build
  cd build/
  ../configure --enable-optimized --disable-assertions --prefix=$HOME/local
  make
  make install   # no sudo needed, since the prefix is under $HOME

Then I followed the “Building in-tree” instructions.  The first part is to get the IWYU code.

   cd $HOME/local/src/llvm/tools/clang/tools
   svn checkout http://include-what-you-use.googlecode.com/svn/trunk/ include-what-you-use

The second part is to do the following steps.

  • Edit tools/clang/tools/Makefile and add |include-what-you-use| to the DIRS variable.
  • Edit tools/clang/tools/CMakeLists.txt and add |add_subdirectory(include-what-you-use)|.  (Both edits are shown after this list.)
  • Re-build clang as per the above instructions.
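
For concreteness, the two edits look roughly like this (the exact contents of each file will vary with the LLVM revision you checked out):

  # tools/clang/tools/Makefile: append include-what-you-use to the DIRS
  # variable (the ... stands for the entries that are already there).
  DIRS := ... include-what-you-use

  # tools/clang/tools/CMakeLists.txt: add this alongside the other
  # add_subdirectory() calls.
  add_subdirectory(include-what-you-use)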

After that, configure a Mozilla tree for a clang build and then run make with the following options: -j1 -k CXX=/home/njn/local/src/llvm/build/Release/bin/include-what-you-use. I’m not certain the -j1 is necessary, but since IWYU spits out lots of output, it seemed wise. The -k tells make to keep building even after errors; this matters because IWYU exits with an error code for every file it analyzes, so make sees each one as a failed compilation.
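
In full, the invocation looks like this (the CXX path is wherever your in-tree build put the IWYU binary; mine is shown):

  make -j1 -k CXX=/home/njn/local/src/llvm/build/Release/bin/include-what-you-use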

Pipe the output to a file, and you’ll see lots of stuff like this.

../jsarray.h should add these lines:
#include <stdint.h>                     // for uint32_t
#include <sys/types.h>                  // for int32_t
#include "dist/include/js/RootingAPI.h"  // for HandleObject, Handle, etc
#include "jsapi.h"                      // for Value, HandleObject, etc
#include "jsfriendapi.h"                // for JSID_TO_ATOM
#include "jstypes.h"                    // for JSBool
#include "vm/String.h"                  // for JSAtom
namespace JS { class Value; }
namespace js { class ExclusiveContext; }
struct JSContext;

../jsarray.h should remove these lines:

The full include-list for ../jsarray.h:
#include <stdint.h>                     // for uint32_t
#include <sys/types.h>                  // for int32_t
#include "dist/include/js/RootingAPI.h"  // for HandleObject, Handle, etc
#include "jsapi.h"                      // for Value, HandleObject, etc
#include "jsfriendapi.h"                // for JSID_TO_ATOM
#include "jsobj.h"                      // for JSObject (ptr only), etc
#include "jspubtd.h"                    // for jsid
#include "jstypes.h"                    // for JSBool
#include "vm/String.h"                  // for JSAtom
namespace JS { class Value; }
namespace js { class ArrayObject; }  // lines 44-44
namespace js { class ExclusiveContext; }
struct JSContext;

I focused on addressing the “should remove these lines” #includes, and I did it manually. There’s also a script you can use to automatically do everything for you;  I don’t know how well it works.

Note that IWYU’s output is just plain wrong about 5% of the time — i.e. it says you can remove #includes that you clearly cannot.  (A lot of the time this seems to be because it hasn’t realized that a macro is needed.)  I also found that, while it produced output for all .cpp files, it only produced output for some of the .h files.  No idea why. Finally, it doesn’t know about local idioms; in particular, if you have platform-dependent code, its suggestions are often terrible because it only sees the files for the platform you are building on.
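
To illustrate the macro case, here’s a hypothetical example (the file and macro names are made up, not taken from Mozilla code):

  // helpers.h
  #define CHECKED_CALL(expr) \
      do { if (!(expr)) return false; } while (0)

  // user.cpp
  #include "helpers.h"   // only the CHECKED_CALL macro is used from this
                         // header, and IWYU can fail to notice that, wrongly
                         // suggesting this #include be removed
  bool setup();

  bool init()
  {
      CHECKED_CALL(setup());
      return true;
  }

Removing that #include as suggested would break the build, so treat each suggestion with a little suspicion.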

Good luck!

MemShrink progress, week 109–112

There’s been a lot of focus on B2G memory consumption in the past four weeks.  Indeed, of the 38 MemShrink bugs fixed in that time, a clear majority relate in some way to B2G.

In particular, Justin Lebar, Kyle Huey and Andrew McCreight have done a ton of important work tracking down leaks in both Gecko and Gaia.  Many of these have been reported by B2G partner companies doing stress testing, such as opening and closing apps hundreds or thousands of times over long periods.  Some examples (including three MemShrink P1s) are here, here, here, here, here, here, here and here.  There are still some P1s remaining (e.g. here, here, here).  This work is painstaking and requires lots of futzing around with low-level tools such as the GC/CC logs, unfortunately.

Relatedly, Justin modified the JS memory reporter to report “notable” strings, which include smallish strings that are duplicated many times — a case that has come up on B2G more than once.  Justin also moved some of the “heap-*” reports that previously lived in about:memory’s “Other measurements” section into the “explicit” tree.  This makes “explicit” closer to “resident” a lot of the time, which is a useful property.

Finally, Luke Wagner greatly reduced the peak memory usage seen while parsing large asm.js examples.  For the Unreal demo, this reduced the peak from 881 MB to 6 MB, and reduced start-up time by 1.5 seconds!  Luke also slightly reduced the size of JSScript, which is one of the most common structures on the JS GC heap, thus reducing pressure on the GC heap, which is always a good thing.


MemShrink progress, week 105–108

This is the first of the every-four-weeks MemShrink reports that I’m now doing.  The 21 bugs fixed in the past four weeks include 11 leak fixes, which is great, but I won’t bother describing them individually.  Especially when I have several other particularly impressive fixes to describe…

Image Handling

Back in March I described how Timothy Nikkel had greatly improved Firefox’s handling of image-heavy pages.  Unfortunately, the fix had to be disabled in Firefox 22 and Firefox 23 because it caused jerky scrolling on pages with lots of small images, such as Pinterest.

Happily, Timothy has now fixed those problems, and so his previous change has been re-enabled in Firefox 24.  This takes a big chunk out of the #1 item on the MemShrink big ticket items list.  Fantastic news!

Lazy Bytecode Generation

Brian Hackett finished implementing lazy bytecode generation.  This change means that JavaScript functions don’t have bytecode generated for them until they run.  Because lots of websites use libraries like jQuery, in practice a lot of JS functions are never run, and we’ve found this can reduce Firefox’s memory consumption by 5% or more on common workloads!  That’s a huge, general improvement.

Furthermore, it significantly reduces the number of things that are allocated on the GC heap (i.e. scripts, strings, objects and shapes that are created when bytecode for a function is generated).  This reduces pressure on the GC, which makes it less likely we’ll have bad GC behaviour (e.g. pauses, or too much memory consumption) in cases where the GC heuristics aren’t optimal.

This finished off item #5 on the old MemShrink big ticket items list.  Great stuff.  It will be in Firefox 24.

Add-on Memory Reporting

Nils Maier implemented add-on memory reporting in about:memory.  Here’s some example output from my current session.

├───33,345,136 B (05.08%) -- add-ons
│   ├──18,818,336 B (02.87%) ++ {d10d0bf8-f5b5-c8b4-a8b2-2b9879e08c5d}
│   ├──11,830,424 B (01.80%) ++ {59c81df5-4b7a-477b-912d-4e0fdf64e5f2}
│   └───2,696,376 B (00.41%) ++ treestyletab@piro.sakura.ne.jp/js-non-window/zones/zone(0x7fbd7bf53800)

It’s obvious that Tree Style Tabs is taking up 2.7 MB.  What about the other two entries?  It’s not immediately obvious, but if I look in about:support at the “extensions” section I can see that they are AdBlock Plus and ChatZilla.

If you’re wondering why those add-ons are reported as hex strings, it’s due to a combination of the packaging of each individual add-on, and the fact that the memory reporting code is C++ and the add-on identification code is JS and there aren’t yet good APIs to communicate between the two.  (Yes, it’s not ideal and should be improved, but it’s a good start.)  Also, not all add-on memory is reported, just that in JS compartments;  old-style XUL add-ons in particular can have their memory consumption under-reported.

Despite the shortcomings, this is a big deal.  Users have been asking for this information for years, and we’ve finally got it.  (Admittedly, the fact that we’ve tamed add-on leaks makes it less important than it used to be, but it’s still cool.)  This will also be in Firefox 24.

B2G

Gregor Wagner has landed a nice collection of patches to help the Twitter and Notes+ apps on B2G.

While on the topic of B2G, in today’s MemShrink meeting we discussed the ongoing problem of slow memory leaks in the main B2G process.  Such leaks can cause the phone to crash or become flaky after it’s been running for hours or days or weeks, and they’re really painful to reproduce and diagnose.  Our partners are finding these leaks when doing multi-hour stress tests as part of their QA processes.  In contrast, Mozilla doesn’t really have any such testing, and as a result we are reacting, flat-footed, to external reports, rather than catching the leaks early ourselves.  This is a big problem because users will rightly expect their phones to run for weeks (or even months) without rebooting.

Those of us present at the meeting weren’t quite sure how we can improve our QA situation to look for these leaks.  I’d be interested to hear any suggestions.  Thanks!

MemShrink’s 2nd Birthday

June 14, 2013 is the second anniversary of the first MemShrink meeting.  This time last year I took the opportunity to write about the major achievements from MemShrink’s first year.  Unsurprisingly, since then we’ve been picking fruit from higher in the tree, so the advances have been less dramatic.  But it’s been 11 months since I last updated the “big ticket items” list, so I will take this opportunity to do so, providing a partial summary of the past year at the same time.

The Old Big Ticket Items List

#5: Better Script Handling

This had two parts.  The first part was the sharing of immutable parts of scripts, which Till Schneidereit implemented.  It can save multiple MiBs of memory, particularly if you have numerous tabs open from the same website.

The second part is lazy bytecode generation, which Brian Hackett has implemented and landed, though it hasn’t yet been enabled.  Brian is working out the remaining kinks and hopes to enable it by the end of the current (v24) cycle.  Hopefully he will, because measurements have shown that it can reduce Firefox’s memory consumption by 5% or more on common workloads!  That’s a huge, general improvement.  Furthermore, it significantly reduces the number of things that are allocated on the GC heap (i.e. scripts, strings, objects and shapes that are created when bytecode for a function is generated).  This reduces pressure on the GC, which makes it less likely we’ll have bad GC behaviour (e.g. pauses, or too much memory consumption) in cases where the GC heuristics aren’t optimal.

So the first part is done and the second is imminent, which is enough to cross this item off the list.  [Update:  Brian just enabled lazy bytecode on trunk!]

#4: Regain compartment-per-global losses

Bill McCloskey implemented zones, which restructured the heap to allow a certain amount of sharing between zones. This greatly reduced wasted space and reduced memory consumption in common cases by ~5%.

Some more could still be done for this issue.  In particular, it’s not clear if things have improved enough that many small JSMs can be used without wasting memory.  Nonetheless, things have improved enough that I’m happy to take this item off the list.

#3: Boot2Gecko

This item was about getting about:memory (or something similar) working on B2G, and using it to optimize memory.  This was done some time ago and the memory reporters (plus DMD) were enormously helpful in improving memory consumption.  Many of the fixes fell under the umbrella of Operation Slim Fast.

So I will remove this particular item from the list, but memory optimizations for B2G are far from over, as we’ll see below.

#2: Compacting Generational GC

See below.

#1: Better Foreground Tab Image Handling

See below.

The New Big Ticket Items List

#5: pdf.js

pdf.js was recently made the default way of opening PDFs in Firefox, replacing plug-ins such as Adobe Reader.  While this is great in a number of respects, such as security, it’s not as good for memory consumption, because pdf.js can use a lot of memory in at least some cases.  We need to investigate the known cases and improve things.

#4: Dev tools

While we’ve made great progress with memory profiling tools that help Firefox developers, the situation is not nearly as good for web developers.  Google Chrome has heap profiling tools for web developers, and Firefox should too.  (The design space for heap profilers is quite large and so Firefox shouldn’t just copy Chrome’s tools.)  Alexandre Poirot has started work on a promising prototype, though there is a lot of work remaining before any such prototype can make it into a release.

#3: B2G Nuwa

Cervantes Yu and Thinker Li have been working on Nuwa, which aims to give B2G a pre-initialized template process from which every subsequent process will be forked.  This might sound esoteric, but the important part is that it greatly increases the ability for B2G processes to share unchanging data.  In one test run, this increased the number of apps that could be run simultaneously from five to nine, which is obviously a big deal.  The downside is that getting it to work requires some extremely hairy fiddling with low-level code.  Fingers crossed it can be made to work reliably!

Beyond Nuwa, there are still plenty of other ways that B2G’s memory consumption can be optimized, as you’d expect in an immature mobile OS.  Although I won’t list anything else in the big ticket items list, work will continue here, as per MemShrink’s primary aim: “MemShrink is a project that aims to reduce the memory consumption of Firefox (on desktop and mobile) and Firefox OS.”

#2: Compacting Generational GC

Generational GC will reduce fragmentation, reduce the working set size, and speed up collections.  Great progress has been made here — the enormous task of exactly rooting the JS engine and the browser is basically finished, helped along greatly by a static analysis implemented by Brian Hackett.  And Terrence Cole and others are well into converting the collector to be generational.  So we’re a lot closer than we were, but there is still some way to go.  So this item is steady at #2.

#1: Better Foreground Tab Image Handling

Firefox still uses much more memory than other browsers on image-heavy pages.  Fortunately, a great deal of progress has been made here.  Timothy Nikkel fixed things so that memory usage when scrolling through image-heavy pages is greatly reduced.  However, this change caused some jank on pages with lots of small images, so it’s currently disabled on the non-trunk branches.  Also, there is still a memory spike when image-heavy pages are first loaded, which needs to be fixed before this item can be considered done.  So this item remains at #1.

Summary

Three items from the old list (#3, #4, #5) have been ticked off.  Two items remain (#1, #2) — albeit with good progress having been made — and keep their positions on the list.  Three items have been added to the new list (#3, #4, #5).

Let me know if I’ve omitted anything important!

MemShrink progress, week 103–104

I’m back from vacation.  Many thanks to Andrew McCreight for writing two MemShrink reports (here and here) while I was away, and to Justin Lebar for hosting them on his blog.

Fixes

areweslimyet.com proved its value again, identifying a 40 MiB regression relating to layer animations.  Joe Drew fixed the problem quickly.  Thanks, Joe!

Ben Turner changed the GC heuristics for web workers on B2G so that less garbage is allowed to accumulate.  This was a MemShrink:P1.

Bas Schouten fixed a gfx top-crasher that was caused by inappropriate handling of an out-of-memory condition.  This was also a MemShrink:P1.

Randell Jesup fixed a leak in WebRTC code.

Changes to these progress reports

I’ve written over sixty MemShrink progress reports in the past two years.  When I started writing them, I had two major goals.  First, to counter the notion that nobody at Mozilla cares about memory consumption — this was a common comment on tech blogs two years ago.  Second, to keep interested people informed of MemShrink progress.

At this point, I think the first goal has been comprehensively addressed.  (More due to the fact that we actually have improved Firefox’s memory consumption than because I wrote a bunch of blog posts!)  As a result, I feel like the value of these posts has diminished.  I’m also tired of writing them;  long-time readers may have noticed that they are much terser than they used to be.

Therefore, going forward, I’m only going to post one report every four weeks.  (The MemShrink team will still meet every two weeks, however.)  I’ll also be more selective about what I write about — I’ll focus on the big improvements, and I won’t list every little leak that has been fixed, though I might post links to Bugzilla searches that list those fixes. Finally, I won’t bother recording the MemShrink bug counts.  (In fact, I’ve already stopped: this report is the first to omit them.)  At this point it’s pretty clear that the P1 list hovers between 15 and 20 most of the time, and the P2 and P3 lists grow endlessly.

Apologies to anyone disappointed by this, but rest easy that it will give me more time to work on actual improvements to Firefox :)

On vacation until June 3rd

I’m about to go on vacation.  I won’t be back until June 3rd, and I will be checking email sporadically at best.

I look forward to a mountain of bugmail when I return.

Recent about:memory improvements

The landing of patches from two recent bugs has substantially changed the look of about:memory.  When you load the page, all you see now is the following.

[Screenshot: about:memory]

Measurements aren’t taken straight away;  you have to click on the “Measure” button for that to happen.  Also, adding ?verbose to the URL no longer has an effect.  If you want verbose output, you need to click on the “verbose” checkbox.

The motivation for this change was that about:memory’s functionality has expanded greatly over the past two years, and cramming more functionality into the existing UI was a bad idea.  There are numerous advantages to the new UI.

  • No need to control behaviour via the URL, which is admittedly an odd way to do it.
  • You can switch between verbose and non-verbose modes without having to reload the page.
  • All the buttons are at the top of the page instead of the bottom, so you don’t have to scroll down.
  • You can trigger a GC/CC/minimize memory usage event without having to do a memory measurement.
  • When you save reports to file, it’s clearer that a new measurement will be taken, rather than saving any measurements that are currently displayed on the screen.
  • The buttons are grouped by function, which makes them easier to understand.

There’s also the “Load and diff…” button, which lets you easily find the difference between two saved memory report files.  Here’s some example output taken after closing a tab containing a Google search result.

[Screenshot: about:memory diff output]

You can see that all the measurements are negative.  The [-] markers on leaf nodes in the tree indicate that they are present in the first file but not in the second file.  Corresponding [+] markers are used for measurements present in the second file but not the first.

MemShrink progress, week 95–96

B2G Fixes

Kyle Huey made image.src='' discard the image immediately even if the image is not in the document.  This provides a way for image memory to be discarded immediately, which is important for some B2G apps such as the Gallery app.  This was a MemShrink:P1 bug.

Justin Lebar fixed a bad leak in AudioChannelAgent.

Mike Habicher fixed an image-related leak that was manifesting on B2G.

Other Fixes

I exposed the existing JSON memory report dumping functionality in about:memory.  As a result, gzipped JSON is now the preferred format for attaching memory report data to bugs.  This was a MemShrink:P1 bug.

Honza Bambas overhauled the DOM storage code.  Apparently this might reduce memory consumption, but I fully admit to not knowing the details.

Nicolas Silva fixed a leak in layers code relating to OMTC.

I removed the MEMORY_EXPLICIT telemetry measurement.  I didn’t want to do this, but it’s been causing hangs for some users, and those hangs are hard to avoid due to the complexity of reporting memory consumption from web workers.  Furthermore, in practice the measurement was sufficiently noisy that we’ve never gotten anything useful from it.

Help Needed

Mozilla code uses a mixture of fallible and infallible allocations.  In theory, any allocation that could request a large amount of memory (e.g. a few hundred KiB or more) should use a fallible allocator.  We’ve recently seen some cases where large allocations were being made infallibly, which led to OOM crashes.  Bug 862592 proposes adding an assertion to the infallible allocators that the requested size isn’t too large.  This is an easy bug to get started with, if anyone is interested.  Details are in the bug.
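
For illustration, here’s a minimal sketch of the kind of check being proposed — the wrapper name and the threshold are mine, not taken from the bug:

  #include "mozilla/Assertions.h"
  #include "mozilla/mozalloc.h"

  // Illustrative threshold: anything bigger than this should arguably be
  // requested via a fallible allocator, so that OOM can be handled gracefully.
  static const size_t kMaxInfallibleSize = 64 * 1024;

  void* moz_xmalloc_checked(size_t aSize)
  {
      MOZ_ASSERT(aSize <= kMaxInfallibleSize,
                 "large infallible allocation; use a fallible allocator instead");
      return moz_xmalloc(aSize);   // infallible: aborts the process on OOM
  }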

Bug Counts

Here are the current bug counts.

  • P1: 15 (-4/+4)
  • P2: 145 (-0/+7)
  • P3: 131 (-2/+4)
  • Unprioritized: 5 (-1/+5)

Interregnum

I will be on vacation during May.  As a result, there will be no MemShrink reports for the next three fortnights.  (Although I’ll be away for just over four weeks, that period covers three MemShrink meetings.)  I’ll be back with the next report on June 11.  See you then!