A big step towards generational and compacting GC

People frequently ask me for status updates on generational GC, and I usually say I’ll tell them when something notable happens. Well, something notable just happened: exact rooting landed.

What is exact rooting? In order to support generational and/or compacting GC, you need to be able to move GC-allocated things such as objects around. This means you can’t have raw C++ pointers to any objects that might move; instead, you need some kind of indirect pointer that can be updated when necessary.

Unfortunately, both the JS engine and Gecko have a lot of pointers to GC-allocated things. The process of checking and converting them has been the main part of a task called “exact rooting”, and that’s what just finished. This has required an enormous amount of what is essentially very tedious work. Jim Blandy summarized it nicely, as follows.

I’ve never heard of a major project escaping from conservative GC once it had entered that state of sin; nor have I heard of anyone implementing a moving collector after starting with a non-moving collector. So, doing *both* is impressive. I hope it pays off big!

Major kudos to Terrence Cole, Steve Fink, Jon Coppeard, Brian Hackett, and the small army of other helpers who did this. Now that they’ve finished eating this gigantic serving of vegetables, they can move on to dessert, i.e. making the GC generational and compacting.

YEINU: Your Experience Is Not Universal

I’ve lost count of the number of times I’ve heard statements like the following.

Firefox crashes ten times a day for me.  I don’t understand how anyone can use it.

There’s a simple answer: for most people it doesn’t crash ten times a day. But the person making the statement hasn’t realized that what links the first sentence to the second is an assumption — that other people’s experiences are the same. This is despite the fact that browsers are immensely complex, highly configurable, and used in many different ways on an enormous range of inputs.

(BTW, in case it’s unclear: I don’t think it’s ok for Firefox to crash ten times a day for anyone.)

My mental rebuttal to this kind of thinking is YEINU, short for “your experience is not universal”. It’s something of a past-tense dual to the well-known YMMV (“your mileage may vary”).

Although I’ve mostly thought about this in the context of browser development, it’s not hard to see how it relates to many facets of life. In particular, just this morning I was thinking about it in relation to this question on Quora and the general notion of privilege. Out of curiosity, I googled the exact phrase (using quotes), and while I got several hits on software forums (such as this and this, the latter being on a Mozilla forum), three of the five highest-ranked hits were from posts on feminist blogs (#1, #4, and #5). Interesting!

So, next time you’re puzzled by someone’s reaction to something, it might be worth considering if YEINU.

How I work on Mozilla code

Most Mozilla developers have their own particular set-ups and work-flows, and over time develop various scripts, shortcuts, and habits to make their lives easier.  But we rarely talk about them.

In this post I will describe various interesting aspects of how I work on Mozilla code, in the hope that (a) it will give other people useful ideas, and (b) other people will in turn give me useful ideas.

Machines

I have two machines:  a quite new and very fast Linux desktop machine, on which I do 99% of my coding work, and a two-year-old MacBook Pro, on which I do little coding but a lot of browsing and other non-development stuff.  In theory my Linux desktop also hosts a Windows VM; in practice I haven’t set one up yet!

I use Ubuntu on my Linux machine.  I don’t really enjoy sysadmin-type stuff, so I use the most widely-used, vanilla distribution available.  That way, if something goes wrong there’s a decent chance somebody else will have had the same problem.

Mercurial Repositories

I do most of my work on mozilla-inbound.  I clone that into a “master” repository called ws0, which I leave untouched. Then I have nine local clones of ws0, called ws1..ws9. I don’t have trouble filling the nine local clones because I usually have multiple coding tasks in flight at any one time. Indeed, this is a necessity when dealing with the various latencies involved in development, such as compilation and local test runs (minutes), try server runs (hours), and reviews (days).

Mercurial

I use Mercurial queues and am quite content with them, though I am looking forward to changeset evolution becoming stable.  I tried git once but didn’t like it much;  the CLI is awful, the speed wasn’t noticeably better than Mercurial (and this was before I upgraded from Mercurial 1.8 to 2.7, which is much faster), and it was inconvenient to have to move patches over to a Mercurial repo before landing.
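
For anyone who hasn’t used Mercurial queues, the basic cycle looks something like the following (these are standard mq commands, nothing specific to my setup; the patch name is just an example).

hg qnew fix-the-thing     # start a new patch on top of the stack
# ... edit some files ...
hg qref                   # fold the edits so far into the top patch
hg qpop && hg qpush       # pop and re-push to move around the stack
hg qfinish --applied      # turn applied patches into real changesets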

One problem I had with Mercurial queues was that I would often type hg qref patchname when I meant hg qnew patchname. This can lead to surprising and annoying mix-ups with patches, so I wrote a pre-hook for hg qref — if I give it an argument that isn’t -e or -u, I almost certainly meant hg qnew, so it aborts with a reminder. This has saved me some hassle on numerous occasions.
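
The guard amounts to something like the following sketch.  Mercurial passes a pre-command hook the positional arguments via the HG_PATS environment variable (options such as -e and -u arrive separately, in HG_OPTS); the “[]” rendering of an empty argument list is an assumption worth checking against your Mercurial version.

#! /bin/sh
# qref-guard.sh: abort 'hg qref' when it is given a positional argument,
# because that almost always means I meant 'hg qnew'.  Wired up in
# ~/.hgrc with:
#
#   [hooks]
#   pre-qrefresh = ~/bin/qref-guard.sh
#
if [ "$HG_PATS" != "[]" ]; then
    echo "qref got argument(s) $HG_PATS; did you mean hg qnew?" >&2
    exit 1
fi
exit 0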

Some Mercurial extensions that I particularly like are color (colour output), progress (progress bars for slow operations), record (commit part of a change) and bzexport (upload patches to Bugzilla from the command line).

Try Server

Even though I do most of my work with mozilla-inbound, pushing to try from mozilla-inbound isn’t a great idea, because every so often you’ll catch some test breakage caused by someone else’s patch, and then you have to work out whether it’s your fault or pre-existing.  So recently I took RyanVM’s advice and wrote a script that transplants the patch queue from a mozilla-inbound repo to a mozilla-central repo, and then pushes to try.  This avoids the test bustage problem, but occasionally the patches don’t apply cleanly.  In that case I just push from the mozilla-inbound repo and deal with the risk of test bustage.
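
The script amounts to the following sketch.  The repo paths and the pre-configured “try” path are assumptions about my layout, the try syntax is assumed to already be in the top patch’s commit message, and a real version needs better error handling.

#! /bin/sh
# Transplant the mq patch queue from an inbound clone onto an up-to-date
# mozilla-central clone, then push the lot to try.
set -e
inbound=$1                  # path to a mozilla-inbound clone, e.g. ws1
central=$HOME/moz/mc-try    # dedicated mozilla-central clone (assumed)

cd "$central"
hg qpop -a || true          # unapply leftovers (ok if none are applied)
hg pull -u                  # bring central up to date
rm -rf .hg/patches
cp -r "$inbound/.hg/patches" .hg/patches   # copy the whole queue over
hg qpush -a                 # the step that can fail to apply cleanly
hg push -f try              # 'try' path configured in the clone's hgrc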

Compiling

I mostly use Clang, though I also sometimes use GCC.  Clang is substantially faster than GCC, and its error messages are much better (though GCC’s have improved recently).  Clang’s generated code is slightly worse (~5–10% slower), but that’s not much of an issue for me while developing.  In fact, faster compilation time is important enough that my most common build configuration has the following line:

ac_add_options --enable-optimize='-O0'  # worse code, but faster builds

Last time I measured (which was before unified builds were introduced) this shaved a couple of minutes off build times, as compared to a vanilla --enable-optimize build.

Mozconfigs

I have a lot of mozconfig files.  I switch frequently between different kinds of builds, so much so that all my custom short-cut commands (e.g. for building and running the browser) have a mandatory argument that specifies the relevant mozconfig.  As a result, the mozconfig names I use are much shorter than the default names.  For example, the following is a selection of some of my more commonly-used mozconfigs for desktop Firefox.

  • d64: debug 64-bit build with clang
  • o64: optimized 64-bit build with clang
  • gd64: debug 64-bit build with GCC
  • go64: optimized 64-bit build with GCC
  • cd64: debug 64-bit build with clang and ccache
  • co64: optimized 64-bit build with clang and ccache
  • o64v: optimized 64-bit build with clang and --enable-valgrind
  • o64dmd: optimized 64-bit build with clang and --enable-dmd

Although I never do 32-bit browser builds, I do sometimes do 32-bit JS shell builds, so the ’64’ isn’t entirely redundant!
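
To make the list above concrete, here is roughly what one of these files contains.  This is a sketch of my o64v mozconfig written from memory, so treat the exact option spellings as assumptions and check them against your tree.

# o64v: optimized 64-bit clang build with Valgrind support
. $topsrcdir/browser/config/mozconfig
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/o64v
ac_add_options --enable-optimize
ac_add_options --disable-debug
ac_add_options --enable-valgrind
export CC=clang
export CXX=clang++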

Building

I have a script called mmq that I use to build the browser.  I invoke it like this:

mmq o64

The argument is the mozconfig/objdir name.  The script invokes the build and redirects the output to an errors.err file (for use with quickfix, see below).  Once compilation finishes, it also does some hacky grepping to reprint the first five compilation errors, if there were any.  I do this to make it easier to find the errors, because sometimes they get swamped by the subsequent output.  (My use of quickfix has made this feature less important than it once was, though it’s still a good thing to have.)
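
For the curious, mmq amounts to something like the following sketch.  The mozconfig location is an assumption about my layout, and the real script is hackier.

#! /bin/sh
# mmq: build the tree using the named mozconfig/objdir, capturing the
# output in errors.err for use with Vim's quickfix.
objdir=$1
if [ -z "$objdir" ]; then
    echo "usage: mmq <mozconfig>" >&2
    exit 1
fi
MOZCONFIG=$HOME/mozconfigs/$objdir    # assumed mozconfig location
export MOZCONFIG
make -f client.mk 2>&1 | tee errors.err
# Reprint the first five errors so they aren't swamped by later output.
grep ' error: ' errors.err | head -5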

Profiles

I have multiple profiles.

  • default: used for my normal browsing.
  • dev: my standard development profile.
  • dev2: a second development profile, mostly used when I’m already using the dev profile in another browser invocation.
  • e10s: a profile with Electrolysis enabled.
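
(Creating a profile is a one-off step; if memory serves, the -CreateProfile flag does it without opening the profile manager.)

o64/dist/bin/firefox -CreateProfile dev2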

Starting Firefox

I have a script called ff that I use to start Firefox.  I invoke it like this:

ff o64 dev

The first argument is the mozconfig, and the second is the profile.  Much of the time, this invokes Firefox in the usual way, e.g.:

o64/dist/bin/firefox -P dev -no-remote

However, this script knows about my mozconfigs and it automatically does more elaborate invocations when necessary, e.g. for DMD (which requires the setting of some environment variables).  I also wrote it so that I can tack on gdb as a third argument and it’ll run under GDB.
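
Stripped of the mozconfig-specific parts, ff is essentially the following sketch.

#! /bin/sh
# ff: start Firefox from the named objdir with the named profile,
# optionally under GDB.  The real script also sets extra environment
# variables for certain mozconfigs (e.g. DMD builds).
objdir=$1
profile=$2
if [ -z "$objdir" ] || [ -z "$profile" ]; then
    echo "usage: ff <mozconfig> <profile> [gdb]" >&2
    exit 1
fi
if [ "$3" = "gdb" ]; then
    gdb --args "$objdir/dist/bin/firefox" -P "$profile" -no-remote
else
    "$objdir/dist/bin/firefox" -P "$profile" -no-remote
fi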

Virtual desktop and window layout

I use a 2×2 virtual desktop layout.

In each of the top-left and bottom-left screens I have three xterms — a full-height one on the left side in which I do editing, and two half-height ones on the right side in which I invoke builds, hg commands, and sundry other stuff.

In the top-right screen I have Firefox open for general use.

In the bottom-right screen I have a Chatzilla window open.

Text Editing

I use Vim.  This is largely due to path dependence;  it’s what they taught in my introductory programming classes at university, and I’ve happily used it ever since.  My setup isn’t particularly advanced, and most of it isn’t worth writing about.  However, I have done a few things that I think are worth mentioning.

Tags

Tags are fantastic — they let you jump immediately to the definition of a function/type/macro.  I use ctags and I have an alias for the following command.

ctags -R --langmap=C++:.c.h.cpp.idl --languages=C++ --exclude='*dist\/include*' --exclude='*[od]32/*' --exclude='*[od]64/*'

I have to re-run this command periodically to keep up with changes to the codebase.  Fortunately, it only takes about 5 seconds on my fast SSD.  (My old machine, with a mechanical HD, took much longer.)  The coverage isn’t perfect but it’s good enough, and the specification of .idl files in the --langmap option was a recent tweak that improved coverage quite a bit.

Quickfix

I now use quickfix, which is a special mode to speed up the edit-compile-edit cycle.  The commands I use to build Firefox redirect the output to a special file, and if there are any compile errors, I use Vim’s quickfix command to quickly jump to their locations.  This saves enormous amounts of manual file and line traversal — I can’t recommend it highly enough.

In order to use quickfix you need to tell Vim what the syntax of the compile errors is.  I have the following command in my .vimrc for this.

" Multiple entries (separated by commas):
" - compile (%f:%l:%c) errors for different levels of file nesting
" - linker (%f:%l) errors for different levels of file nesting
set efm=../../../../../%f:%l:%c:\ error:\ %m,../../../../%f:%l:%c:\ error:\ %m,../../../%f:%l:%c:\ error:\ %m,../../%f:%l:%c:\ error:\ %m,../%f:%l:%c:\ error:\ %m,%f:%l:%c:\ error:\ %m,../../../../../%f:%l:\ error:\ %m,../../../../%f:%l:\ error:\ %m,../../../%f:%l:\ error:\ %m,../../%f:%l:\ error:\ %m,../%f:%l:\ error:\ %m,%f:%l:\ error:\ %m

This isn’t pretty, but it works well for Mozilla code.  Then it’s just a matter of doing :cf to load the new errors file (which also takes me to the first error) and then :cn/:cp to move forward and backward through the list.  Occasionally I get an error in a header file that is present in the objdir and the matching fails, and so I have to navigate to that file manually, but this is rare enough that I haven’t bothered trying to fix it properly.

One nice thing about quickfix is that it lets me start fixing errors before compilation has finished!  As soon as I see the first error message I can do :cf.  This means I have to re-do :cf and skip over the already-fixed errors if more errors occur later, but this is still often a win.

If you use Vim, work on Mozilla C++ code, and don’t use quickfix, you should set it up right now.  There are many additional commands and options, but what I’ve written above is enough to get you started, and covers 95% of my usage.  (In case you’re curious, the :cnf, :cpf, :copen and :cclose commands cover the other 5%.)

:grep

I also set up Vim’s :grep command for use with Firefox.  I put the following script in ~/bin/.

#! /bin/sh
# Recursively grep Mozilla source files for a pattern, skipping objdirs
# and repository metadata.
pattern=$1
if [ -z "$pattern" ]; then
    echo "usage: $0 <pattern>" >&2
    exit 1
fi
grep -n -r -s \
    --exclude-dir="*[od]32*" \
    --exclude-dir="*[od]64*" \
    --exclude-dir=".hg*" \
    --exclude-dir=".svn*" \
    --include="*.cpp" \
    --include="*.h" \
    --include="*.c" \
    --include="*.idl" \
    --include="*.html" \
    --include="*.xul" \
    --include="*.js" \
    --include="*.jsm" \
    "$pattern"

It does a recursive grep for a pattern through various kinds of source files, ignoring my objdirs and repository directories.  I point Vim’s ‘grepprg’ option at the script; after that, typing “:grep <pattern>” makes Vim run the script and send the results to quickfix, so I can again use :cn and :cp to navigate through the matches.  (Vim already knows about grep’s output format, so you don’t need to configure anything for that.)

I can also use that script from the command line, which is useful when I want to see all the matches at once, rather than stepping through them one at a time in Vim.

Trailing whitespace detection

These lines in my .vimrc tell Vim to highlight any trailing whitespace in red.

highlight ExtraWhitespace ctermbg=red guibg=red
autocmd BufWinEnter *.{c,cc,cpp,h,py,js,idl} match ExtraWhitespace /\s\+$/
autocmd InsertEnter *.{c,cc,cpp,h,py,js,idl} match ExtraWhitespace /\s\+\%#\@<!$/
autocmd InsertLeave *.{c,cc,cpp,h,py,js,idl} match ExtraWhitespace /\s\+$/
autocmd BufWinLeave *.{c,cc,cpp,h,py,js,idl} call clearmatches()

[Update: I accidentally omitted the final four lines of this when I first published this post.]

The good thing about this is that I now never submit patches with trailing whitespace.  The bad thing is that I can see where other people have left behind trailing whitespace :)

Ctrl-P

Finally, I installed the Ctrl-P plugin, which aims to speed up the opening of files by letting you type in portions of the file’s path.  This is potentially quite useful for Mozilla code, where the directory nesting can get quite deep.  However, Ctrl-P has been less of a win than I hoped, for two reasons.

First, it’s quite slow, even on my very fast desktop with its very fast SSD.  While typing, there are often pauses of hundreds of milliseconds after each keystroke, which is annoying.

Second, I find it too eager to find matches.  If you type a sequence of characters, it will match against any file that has those characters present in that order, no matter how many other characters separate them, and it will do so case-insensitively.  This might work well for some people, but if I’m opening a file such as content/base/src/nsContentUtils.cpp, I always end up typing the filename in full.  By the time I’ve typed “nsContentU”, it would ideally have narrowed things down to the two files in the repository that match that exact string.  Instead I get the following.

> widget/tests/window_composition_text_querycontent.xul
> dom/interfaces/base/nsIQueryContentEventResult.idl
> dom/base/nsQueryContentEventResult.cpp
> dom/base/nsQueryContentEventResult.h
> content/base/public/nsIContentSecurityPolicy.idl
> content/base/public/nsContentCreatorFunctions.h
> dom/interfaces/base/nsIContentURIGrouper.idl
> content/base/public/nsContentPolicyUtils.h
> content/base/src/nsContentUtils.cpp
> content/base/public/nsContentUtils.h

I wish I could say “case-insensitive exact matches, please”, but I don’t think that’s possible.  As a result, I don’t use Ctrl-P that much, though it’s still occasionally useful if I want to open a file for which I know the name but not the path — it’s faster for that than dropping back to a shell and using find.

Conclusion

That’s everything of note that I can think of right now.  Please feel free to steal as many ideas as you wish from this post.

I haven’t given full details for a lot of the things I mentioned above. I’m happy to give more details (e.g. what various scripts look like) if people are interested.

Finally, I encourage other developers to write posts like this one explaining how they work on Mozilla code.  Thanks!

System-wide memory measurement for Firefox OS

Have you ever wondered exactly how all the physical memory in a Firefox OS device is used?   Wonder no more.  I just landed a system-wide memory reporter which works on any Firefox product running on a Linux system.  This includes desktop Firefox builds on Linux, Firefox for Android, and Firefox OS.

This memory reporter is a bit different to the existing ones, which work entirely within Mozilla processes.  The new reporter provides measurements for the entire system, including every user-space process (Mozilla or non-Mozilla) that is running.  It’s aimed primarily at profiling Firefox OS devices, because we have full control over the code running on those devices, and so it’s there that a system-wide view is most useful.

Here is some example output from a GeeksPhone Keon.

System
Other Measurements 
397.24 MB (100.0%) -- mem
├──215.41 MB (54.23%) ── free
├──105.72 MB (26.61%) -- processes
│  ├───57.59 MB (14.50%) -- process(/system/b2g/b2g, pid=709)
│  │   ├──42.29 MB (10.65%) -- anonymous
│  │   │  ├──42.25 MB (10.63%) -- outside-brk
│  │   │  │  ├──41.94 MB (10.56%) ── [rw-p] [69]
│  │   │  │  └───0.31 MB (00.08%) ++ (2 tiny)
│  │   │  └───0.05 MB (00.01%) ── brk-heap/[rw-p]
│  │   ├──13.03 MB (03.28%) -- shared-libraries
│  │   │  ├───8.39 MB (02.11%) -- libxul.so
│  │   │  │   ├──6.05 MB (01.52%) ── [r-xp]
│  │   │  │   └──2.34 MB (00.59%) ── [rw-p]
│  │   │  └───4.64 MB (01.17%) ++ (69 tiny)
│  │   └───2.27 MB (00.57%) ++ (2 tiny)
│  ├───21.73 MB (05.47%) -- process(/system/b2g/plugin-container, pid=756)
│  │   ├──12.49 MB (03.14%) -- anonymous
│  │   │  ├──12.48 MB (03.14%) -- outside-brk
│  │   │  │  ├──12.41 MB (03.12%) ── [rw-p] [30]
│  │   │  │  └───0.07 MB (00.02%) ++ (2 tiny)
│  │   │  └───0.02 MB (00.00%) ── brk-heap/[rw-p]
│  │   ├───8.88 MB (02.23%) -- shared-libraries
│  │   │   ├──7.33 MB (01.85%) -- libxul.so
│  │   │   │  ├──4.99 MB (01.26%) ── [r-xp]
│  │   │   │  └──2.34 MB (00.59%) ── [rw-p]
│  │   │   └──1.54 MB (00.39%) ++ (50 tiny)
│  │   └───0.36 MB (00.09%) ++ (2 tiny)
│  ├───14.08 MB (03.54%) -- process(/system/b2g/plugin-container, pid=836)
│  │   ├───7.53 MB (01.89%) -- shared-libraries
│  │   │   ├──6.02 MB (01.52%) ++ libxul.so
│  │   │   └──1.51 MB (00.38%) ++ (47 tiny)
│  │   ├───6.24 MB (01.57%) -- anonymous
│  │   │   ├──6.23 MB (01.57%) -- outside-brk
│  │   │   │  ├──6.23 MB (01.57%) ── [rw-p] [22]
│  │   │   │  └──0.00 MB (00.00%) ── [r--p]
│  │   │   └──0.01 MB (00.00%) ── brk-heap/[rw-p]
│  │   └───0.31 MB (00.08%) ++ (2 tiny)
│  └───12.32 MB (03.10%) ++ (23 tiny)
└───76.11 MB (19.16%) ── other

The data is obtained entirely from the operating system, specifically from /proc/meminfo and the /proc/<pid>/smaps files, which are files provided by the Linux kernel specifically for measuring memory consumption.

I wish that the mem entry at the top were the amount of physical memory available.  Unfortunately, there is no way to get that on a Linux system, and so it’s instead the MemTotal value from /proc/meminfo, which is “Total usable RAM (i.e. physical RAM minus a few reserved bits and the kernel binary code)”.  If you’re wondering about the exact meaning of the other entries, you can, as usual, hover the cursor over an entry in about:memory to get a tool-tip explaining what it means.

The measurements given for each process are PSS (proportional set size) measurements.  PSS attributes any shared memory equally among all the processes that share it (for example, a 9 MB segment shared by three processes contributes 3 MB to each process’s PSS), and so it is the only measurement that can sensibly be summed across processes (unlike “Size” or “RSS”, for example).
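
If you want to see where these numbers come from, you can sum a single process’s PSS straight from its smaps file on any Linux machine.  The following one-liner is purely illustrative; the reporter itself does its own parsing in C++.

awk '/^Pss:/ { kb += $2 } END { print kb " kB" }' /proc/<pid>/smaps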

For each process there is a wealth of detail about static code and data.  (The above example only shows a tiny fraction of it, because a number of the sub-trees are collapsed.  If you were viewing it in about:memory, you could expand and collapse sub-trees to your heart’s content.)  Unfortunately, there is little information about anonymous mappings, which constitute much of the non-static memory consumption.  I have some patches that will add an extra level of detail there, distinguishing major regions such as the jemalloc heap, the JS GC heap, and JS JIT code.  For more detail than that, the existing per-process memory reports in about:memory can be consulted.  Unfortunately the new system-wide reporter cannot be sensibly combined with the existing per-process memory reporters because the latter are unaware of implicit sharing between processes.  (And note that the amount of implicit sharing is increased significantly by the new Nuwa process.)

Because this works with our existing memory reporting infrastructure, anyone already using the get_about_memory.py script with Firefox OS will automatically get these reports along with all the usual ones once they update their source code, and the system-wide reports can be loaded and viewed in about:memory as usual. On Firefox and Firefox for Android, you’ll need to set the memory.system_memory_reporter flag in about:config to enable it.

My hope is that this reporter will supplant most or all of the existing tools that are commonly used to understand system-wide memory consumption on Firefox OS devices, such as ps, top and procrank.  And there are certainly other interesting OS-level measurements available that we don’t currently obtain; for example, Jed Davis has plans to measure the pmem subsystem.  Please file a bug or email me if you have other suggestions for adding such measurements.

DMD now works on Windows

DMD is our tool for improving Firefox’s memory reporting.  It helps identify where new memory reporters need to be added in order to reduce the “heap-unclassified” value in about:memory.

DMD has always worked well on Linux, and moderately well on Mac (it is crashy for some people).  And it works on Android and B2G.  But it has never worked on Windows.

So I’m happy to report that DMD now does work on Windows, thanks to the excellent efforts of Catalin Iacob.  If you’re on Windows and you’ve been seeing high “heap-unclassified” values, and you’re able to build Firefox yourself, please give DMD a try.
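
(If memory serves, enabling DMD in your own build is a single mozconfig line, though double-check against the build documentation.)

ac_add_options --enable-dmd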

MemShrink progress, final

I was due to write a MemShrink progress report today, but I’ve decided that after almost 2.5 years, my reserves of enthusiasm for these regular reports have been exhausted.  Sorry!

I do still plan to write posts when significant fixes relating to memory consumption are made.  (For example, when generational GC lands, you’ll hear about it here.)  I will also continue to periodically update the MemShrink “big ticket items” list.  And MemShrink meetings will continue, so MemShrink-tagged bugs will still be triaged.  And for those of you who read the weekly Platform meeting notes, I will continue to write MemShrink updates there.  So don’t despair — good things will continue to happen, but they’ll just be marginally less visible.

Premature Optimisation

I loved this sentence from Olin Shivers’ description of some Scheme history:

I fashionably decried premature optimisation in college without really understanding it until I once committed an act of premature opt so horrific that I can now tell when it is going to rain by the twinges I get in the residual scar tissue. Now I understand premature optimisation.

I’d love to know exactly what the premature optimisation was.

I also read Olin’s Dissertation Advice about fifty times in 2004.  Great stuff.

Libraries should permit custom allocators

Some C and C++ libraries permit the use of custom allocators, which are registered through some kind of external API.  For example, the following libraries used by Firefox provide this facility.

  • FreeType provides this via the FT_MemoryRec_ argument of the FT_New_Library() function.
  • ICU provides this via the u_setMemoryFunctions() function.
  • SQLite provides this via the sqlite3_config() function.

This gives the users of these libraries additional flexibility that can be very helpful.  For example, in Firefox we provide custom allocators that measure the size of all the live allocations done by the library;  these measurements are shown in about:memory.

In contrast, libraries that don’t allow custom allocators are very hard to account for in about:memory, and are major contributors to the dreaded “heap-unclassified” value there.  These include Cairo and the WebRTC libraries.

Now, supporting custom allocators in a library takes some effort.  You have to be careful to always allocate in a fashion that will use the custom allocators if they have been registered.  Direct calls to vanilla allocation/free functions like malloc(), realloc(), and free() must be avoided.  For example, SpiderMonkey allows custom allocators (although Firefox doesn’t need to use that functionality), and I just fixed a handful of cases where it was accidentally using vanilla allocation/free functions.

But, it’s a very useful facility to provide, and I encourage all library writers to consider it.

MemShrink progress, week 121–124

It’s been a quiet but steady four weeks for MemShrink with 19 bugs fixed, including several leaks.

The only fix that I feel is worth highlighting is bug 918207, in which I added support for fast, coarse-grained measurement of a tab’s memory consumption.  The implemented machinery isn’t currently exposed through the UI, though there are two bugs open that will use it:  a simple one that will implement a command for the developer toolbar, and a more complex one that will implement a constantly-updating memory monitor widget for the devtools pane.

See you next time!

Warning for Firefox devs planning to upgrade to Ubuntu 13.10

I just upgraded from Ubuntu 13.04 to Ubuntu 13.10, and Firefox wouldn’t build with either clang or GCC.

clang was initially failing during configure, complaining about not being able to find joystick.h, though the underlying failure was an inability to find stddef.h.  This Ubuntu bug describes a workaround, which is to do the following.

cd /usr/lib/clang/3.2/
sudo ln -s /usr/lib/llvm-3.2/lib/clang/3.2/include

With that in place, I clobbered and rebuilt.  This time clang complained about a problem in allocator.h relating to the name __allocator_base, and GCC complained about C++11 support being insufficient.

Both failures had the same underlying cause, which is that both compilers are hardwired to look for some GCC-4.7 headers (which they shouldn’t) as well as GCC-4.8 headers.  I filed a bug with Ubuntu about this.

I worked around the problem just by renaming /usr/include/c++/4.7/ and /usr/include/x86_64-linux-gnu/c++/4.7/.  There may be more elegant workarounds, but that was good enough for me.
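
Concretely, the renaming was along these lines (the .hidden suffix is arbitrary).

sudo mv /usr/include/c++/4.7 /usr/include/c++/4.7.hidden
sudo mv /usr/include/x86_64-linux-gnu/c++/4.7 /usr/include/x86_64-linux-gnu/c++/4.7.hidden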