Clawing Our Way Back To Precision

JavaScript Memory Usage

Web pages and applications these days are complex and memory-hungry things. A JavaScript VM has to handle megabytes and often gigabytes of objects being created and, sometime later, forgotten. It is critically important for performance to manage memory effectively, by reusing space that is no longer needed rather than requesting more from the OS, and avoiding wasted gaps between the objects that are in use.

This article describes the internal details of what the SpiderMonkey engine is doing in order to move from a conservative mark-and-sweep collector to a compacting generational collector. By doing so, we hope to improve performance by reducing paging and fragmentation, improving cache locality, and shaving off some allocation overhead.

Conservative Garbage Collection

Since June of 2010, SpiderMonkey has used a conservative garbage collector for memory management. When memory is getting tight, or any of various other heuristics say it’s time, we stop and run a garbage collection (GC) pass to find any unneeded objects so that we can free them and release their space for other uses. To do this, we need to identify all of the live objects (everything left over is then garbage). The JSAPI (SpiderMonkey’s JavaScript engine API) provides a mechanism for declaring that a limited set of objects is live. Those objects, and anything reachable from them, must not be discarded during a GC. However, it is common for other objects to be live yet not reachable from the JSAPI-specified set — for example, a newly-created object that will be inserted into another object as a property, but is still in a local variable when a GC is triggered. The conservative collector scans the CPU registers and stack for anything that looks like a pointer into the GC-managed heap. Anything found is marked as live and added to the preexisting set of known-live pointers, which are then traced using a fairly standard incremental mark-and-sweep collection.

Conservative garbage collection is great. It costs nothing to the main program — the “mutator”1 in GC-speak — and makes for very clean and efficient pointer handling in the embedding API. An embedder can store pointers to objects managed by the SpiderMonkey garbage collector (called “GC things”) in local variables or pass them through parameters, and everything will be just fine.

Consider, for example, the following code using SpiderMonkey’s C++ API:

  JSObject *obj1 = JS_NewObject(cx);
  JSObject *obj2 = JS_NewObject(cx);
  JS_DefineProperty(cx, obj2, "myprop", obj1);

In the absence of conservative scanning, this would be a serious error: if the second call to JS_NewObject() happens to trigger a collection, then how is the collector to know that obj1 is still needed? It only appears on the stack (or perhaps in a register), and is not reachable from anywhere else. Thus, obj1 would appear to be dead and would get recycled. The JS_DefineProperty() call would then operate on an invalid object, and the bad guys would use this to take over your computer and force it to work as a slave in a global password-breaking collective. With conservative scanning, the garbage collector will see obj1 sitting on the stack (or in a register), and prevent it from being swept away with the real trash.

Exact Rooting

The alternative to conservative scanning is called “exact rooting”, where the coder is responsible for registering all live pointers with the garbage collector. An example would look like this:

  JSObject *obj1 = JS_NewObject(cx);
  JS_AddObjectRoot(cx, obj1);
  JSObject *obj2 = JS_NewObject(cx);
  JS_DefineProperty(cx, obj2, "myprop", obj1);
  JS_RemoveObjectRoot(cx, obj1);

This, to put it mildly, is a big pain. It’s annoying and error-prone — notice that forgetting to remove a root (e.g. during error handling) is a memory leak. Forgetting to add a root may lead to using invalid memory, and is often exploitable. Language implementations with automatic memory management often start out using exact rooting. At some point, all the careful rooting code gets to be so painful to maintain that a conservative scanner is introduced. The embedding API gets dramatically simpler, everyone cheers and breathes a sigh of relief, and work moves on (at a quicker pace!)

Sadly, conservative collection has a fatal flaw: GC things cannot move. If something moved, you would need to update all pointers to that thing — but the conservative scanner can only tell you what might be a pointer, not what definitely is a pointer. Your program might be using an integer whose value happens to look like a pointer to a GC thing that is being moved, and updating that “pointer” will corrupt the value. Or you might have a string whose ASCII or UTF8 encoded values happen to look like a pointer, and updating that pointer will rewrite the string into a devastating personal insult.

So why do we care? Well, being able to move objects is a prerequisite for many advanced memory management facilities such as compacting and generational garbage collection. These techniques can produce dramatic improvements in collection pause times, allocation performance, and memory usage. SpiderMonkey would really like to be able to make use of these techniques. Unfortunately, after years of working with a conservative collector, we have accumulated a large body of code both inside SpiderMonkey and in the Gecko embedding that assumes a conservative scanner: pointers to GC things are littered all over the stack, tucked away in foreign data structures (i.e., data structures not managed by the GC), used as keys in hashtables, tagged with metadata in their lower bits, etc.

Handle API for Exact Rooting

Over a year ago, the SpiderMonkey team began an effort to claw our way back to an exact rooting scheme. The basic approach is to add a layer of indirection to all GC thing pointer uses: rather than passing around direct pointers to movable GC things, we pass around pointers to GC thing pointers (“double pointers”, as in JSObject**). These Handles, implemented with a C++ Handle<T> template type, introduce a small cost to accesses since we now have to dereference twice to get to the actual data rather than once. But it also means that we can update the GC thing pointers at will, and all subsequent accesses will go to the right place automatically.

In addition, we added a “Rooted<T>” template that uses C++ RAII to automatically register GC thing pointers on the stack with the garbage collector upon creation. By depending on LIFO (stack-like) ordering, we can implement this with only a few store instructions per local variable definition. A Rooted pointer is really just a regular pointer sitting on the stack, except it will be registered with the garbage collector when its scope is entered and unregistered when its scope terminates. Using Rooted, the equivalent of the code at the top would look like:

  Rooted<JSObject*> obj1(cx, JS_NewObject(cx));
  Rooted<JSObject*> obj2(cx, JS_NewObject(cx));
  JS_DefineProperty(cx, obj2, "myprop", obj1);

Is it as clean as the original? No. But that’s the cost of doing exact rooting in a language like C++. And it really isn’t bad, at least most of the time. To complete the example, note that a Rooted<T> can be automatically converted to a Handle<T>. So now let’s consider

  bool somefunc(JSContext *cx, JSObject *obj) {
    JSObject *obj2 = JS_NewObject(cx);
    return JS_GetFlavor(cx, obj) == JS_GetFlavor(cx, obj2);
  }

(The JS_GetFlavor function is fictional.) This function is completely unsafe with a moving GC and would need to be changed to

  bool somefunc(JSContext *cx, Handle<JSObject*> obj) {
    Rooted<JSObject*> obj2(cx, JS_NewObject(cx));
    return JS_GetFlavor(cx, obj) == JS_GetFlavor(cx, obj2);
  }

Now ‘obj’ here is a double pointer, pointing to some Rooted<JSObject*> value. If JS_NewObject happens to GC and change the address of *obj, then the final line will still work fine because JS_GetFlavor takes the Handle<JSObject*> and passes it through, and within JS_GetFlavor it will do a double-dereference to retrieve the object’s “flavor” (whatever that might mean) from its new location.

One added wrinkle to keep in mind with moving GC is that you must register every GC thing pointer with the garbage collector subsystem. With a non-moving collector, as long as you have at least one pointer to every live object, you’re good — the garbage collector won’t treat the object as garbage, so all of those pointers will remain valid. But with a moving collector, that’s not enough: if you have two pointers to a GC thing and only register one with the GC, then the GC will only update one of those pointers for you. The other pointer will continue to point to the old location, and Bad Things will happen.
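To make that hazard concrete, here is a toy sketch — invented types, not the real JSAPI — of what goes wrong when only one of two pointers to a movable thing is registered:

```cpp
#include <utility>
#include <vector>

// Toy sketch (invented types, not the real JSAPI): a "collector" that can
// move an object, but can only fix up pointer locations that were registered.
struct Thing { int value; };

struct ToyGC {
    std::vector<Thing**> rootLocs;            // registered pointer locations
    void addRoot(Thing** loc) { rootLocs.push_back(loc); }
    void moveObject(Thing* old) {
        Thing* fresh = new Thing{old->value}; // copy to the new location
        old->value = -1;                      // poison the stale copy
        for (Thing** loc : rootLocs)
            if (*loc == old)
                *loc = fresh;                 // only registered pointers move
    }
};

// Returns {value seen via the registered pointer, value via the copy}.
std::pair<int, int> demonstrateHazard() {
    ToyGC gc;
    Thing* a = new Thing{42};
    Thing* b = a;             // second pointer to the same thing, NOT registered
    gc.addRoot(&a);
    gc.moveObject(a);         // the "GC" moves the object and rewrites only a
    return {a->value, b->value};  // a follows the move; b dangles at the old copy
}
```

After the move, the registered pointer still sees the object’s value, while the unregistered copy points at the poisoned old location — exactly the Bad Things described above.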

For details of the stack rooting API, see js/public/RootingAPI.h. Also note that the JIT needs to handle moving GC pointers as well, and obviously cannot depend on the static type system. It manually keeps track of movable, collectible pointers and has a mechanism for either updating or recompiling code with “burned in” pointers.
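As a rough sketch of how such an API can be structured — invented names only; the real implementation in js/public/RootingAPI.h is considerably more involved — a Rooted can push itself onto a per-context LIFO list with a couple of stores, and a Handle is just a pointer to the Rooted’s slot:

```cpp
// Rough sketch with invented names; the real implementation lives in
// js/public/RootingAPI.h and is considerably more involved.
struct RootedBase;
struct ToyContext { RootedBase* roots = nullptr; };  // head of the LIFO root list

struct RootedBase {
    ToyContext* cx;
    RootedBase* prev;
    void* ptr;                                // the GC thing pointer itself
    RootedBase(ToyContext* c, void* p) : cx(c), prev(c->roots), ptr(p) {
        cx->roots = this;                     // register: a couple of stores
    }
    ~RootedBase() { cx->roots = prev; }       // unregister: LIFO pop
};

template <typename T>
struct Rooted : RootedBase {
    Rooted(ToyContext* cx, T p) : RootedBase(cx, p) {}
    T get() const { return static_cast<T>(ptr); }
};

template <typename T>
struct Handle {
    void** slot;                              // double pointer: GC rewrites *slot
    Handle(Rooted<T>& root) : slot(&root.ptr) {}
    T get() const { return static_cast<T>(*slot); }  // the double dereference
};
```

When the collector moves a thing, it walks the root list and rewrites each `ptr` slot; every Handle derived from those roots then observes the new address on its next dereference, which is the whole point of the extra indirection.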

Analyses and Progress

Static types

We initially started by setting up a C++ type hierarchy for GC pointers, so that any failure to root would be caught by the compiler. For example, changing functions to take Handle<JSObject*> will cause compilation to fail if any caller tries to pass in a raw (unrooted) JSObject*. Sadly, the type system is simply not expressive enough for this to be a complete solution, so we require additional tools.

Dynamic stack rooting analysis

One such tool is a dynamic analysis: at any point that might possibly GC, we can use the conservative stack scanning machinery to walk up the stack, find all pointers into the JS heap, and check whether each such pointer is properly rooted. If it is not, the conservative scanner mangles the pointer so that it points to an invalid memory location. It can’t just report a failure, because many stack locations contain stale data that is already dead at the time the GC is called (or was never live in the first place, perhaps because it was padding or only used in a different branch). If anything attempts to use the poisoned pointer, it will most likely crash.

The dynamic analysis has false positives — for example, when you store an integer on the stack that, when interpreted as a pointer, just happens to point into the JS heap — and false negatives, where incorrect code is never executed during our test suite so the analysis never finds it. It also isn’t very good for measuring progress towards the full exact rooting goal: each test crashes or doesn’t, and a crash could be from any number of flaws. It’s also a slow cycle for going through hazards one by one.

Static stack rooting analysis

For these reasons, we also have a static analysis2 to report all occurrences of unrooted pointers across function calls that could trigger a collection. These can be counted, grouped by file or type, and graphed over time. The static analysis still finds some false positives, e.g. when a function pointer is called (the analysis assumes that any such call may GC unless told otherwise.) It can also miss some hazards, resulting in false negatives, for example when a pointer to a GC pointer is passed around and accessed after a GC-ing function call. For that specific class of errors, the static analysis produces a secondary report of code where the address of an unrooted GC pointer flows into something that could GC. This analysis is more conservative than the other (i.e., it has more false positives; cases where it reports a problem that would not affect anything in practice) because it is easily possible for the pointed-to pointer to not really be live across a GC. For example, a simple GC pointer outparam where the initial value is never used could fit in this category:

  Value v;
  JS_GetSomething(cx, &v);
  if (...some use of v...) { ... }

The address of an unrooted GC thing v is being taken, and if JS_GetSomething triggers a GC, it appears that it could be modified and used after the JS_GetSomething call returns. Except that on entry to JS_GetSomething, v is uninitialized, so we’re probably safe. But not necessarily — if JS_GetSomething invokes GC twice, once after setting v through its pointer, then this really is a hazard. The poor static analysis, which takes into account very limited data flow, cannot know.

The static analysis gathers a variety of information that is independently useful to implementers, which we have exposed both as static files and through an IRC bot3. The bot watches for changes in the number of hazards to detect regressions, and also so we can cheer on improvements. You can ask it whether a given function might trigger a garbage collection, and if so, it will respond with a potential call chain from that function to GC. The bot also gives links to the detailed list of rooting hazards and descriptions of each.

Currently, only a small handful of reported hazards remain that are not known to be false positives. In fact, as of June 25, 2013 an exact-rooted build of the full desktop browser (with the conservative scanner disabled) passes all tests!

It is very rare for a language implementation to return to exact rooting after enjoying the convenience of a conservative garbage collector. It’s a lot of work, a lot of tricky code, and many infuriatingly difficult bugs to track down. (GC bugs are inordinately sensitive to their environment, timing, and how long it has been since you’ve washed your socks. They also manifest themselves as crashes far removed from the actual bug.)

Generational Garbage Collection

In parallel with the exact rooting effort, we4 began implementing a generational garbage collector (“GGC”) to take advantage of our new ability to move GC things around in memory. (GGC depends on exact rooting, but the work can proceed independently.) At a basic level, GGC divides the JS heap into separate regions. Most new objects are allocated into the nursery, and the majority of these objects will be garbage by the time the garbage collector is invoked5. Nursery objects that survive long enough are transferred into the tenured storage area. Garbage collections are categorized as “minor GCs”, where garbage is only swept out of the nursery, and “major GCs”, where you take a longer pause to scan for garbage throughout the heap. Most of the time, it is expected that a minor GC will free up enough memory for the mutator to continue on doing its work. The result is faster allocation, since most allocations just append to the nursery; less fragmentation, because most fragmentation is due to short-lived objects forming gaps between their longer-lived brethren; and reduced pause times and frequency, since the minor GCs are quicker and the major GCs are less common.
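The allocation fast path is the heart of that speedup. A minimal sketch of a bump-allocating nursery — an assumed design for illustration, not SpiderMonkey’s actual allocator — looks like this:

```cpp
#include <cstddef>
#include <cstdint>

// Minimal sketch of a bump-allocating nursery (an assumed design, not
// SpiderMonkey's actual allocator): the fast path is a pointer bump plus a
// limit check, and a minor GC lets the whole region be reused at once.
class Nursery {
    static const size_t kSize = 64 * 1024;    // 64 KiB for the sketch
    uint8_t storage[kSize];
    uint8_t* cursor = storage;

public:
    void* allocate(size_t bytes) {
        bytes = (bytes + 7) & ~size_t(7);     // keep 8-byte alignment
        if (cursor + bytes > storage + kSize)
            return nullptr;                   // full: time for a minor GC
        void* cell = cursor;
        cursor += bytes;                      // the fast path: one add
        return cell;
    }
    void reset() { cursor = storage; }        // after a minor GC, start over
    size_t used() const { return size_t(cursor - storage); }
};
```

Compare this with a general-purpose allocator that must search free lists: appending to the nursery is a handful of instructions, and because most nursery objects die young, a minor GC reclaims the whole region nearly for free.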

Current Status

GGC is currently implemented and capable of running the JS shell. Although we have just begun tuning it, you can follow it as the GGC line in our collection of benchmarks on AreWeFastYet. At the present time, GGC gives a nice speedup on some individual benchmarks, and a substantial slowdown on others.

Write Barriers

The whole generational collection scheme depends on minor GCs being fast, which is accomplished by keeping track of all external pointers into the nursery. A minor GC only needs to scan through those pointers, plus the pointers within the nursery itself, in order to figure out what to keep, but in order to accomplish this, the external pointer set must be accurately maintained at all times.

The mutator maintains the set of external pointers by calling write barriers on any modification of a GC thing pointer in the heap. If there is an object o somewhere in the heap and the mutator sets a property on o to point to another GC thing, then the write barrier will check if that new pointer is a pointer into the nursery. If so, it records it in a store buffer. A minor GC scans through the store buffer to update its collection of pointers from outside the nursery to within it.
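In sketch form — invented types here; the real barriers are generated through the HeapPtr<T>/Heap<T> wrappers described below — a post write barrier looks roughly like this:

```cpp
#include <vector>

// Sketch with invented types: on every store of a GC thing pointer into the
// heap, check whether the target is in the nursery and, if so, record the
// *location* of the edge for the next minor GC to scan.
struct Cell { bool inNursery; };

struct StoreBuffer {
    std::vector<Cell**> edges;                // tenured-to-nursery edge locations
    void record(Cell** loc) { edges.push_back(loc); }
};

inline void writeBarrierPost(StoreBuffer& sb, Cell** slot, Cell* newValue) {
    *slot = newValue;                         // the store itself
    if (newValue && newValue->inNursery)      // only nursery targets matter
        sb.record(slot);                      // remember where to fix up later
}
```

Recording the slot (not the value) is what matters: when a minor GC moves the nursery target, it rewrites `*slot` to point at the tenured copy.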

Write barriers are an additional implementation task required for GGC. We again use the C++ type system to enforce the barriers in most cases — see HeapPtr<T> for inside SpiderMonkey, Heap<T> for outside. Any writes to a Heap<T> will invoke the appropriate write barrier code. Unsurprisingly, many other cases are not so simply solved.

Barrier Verifier

We have a separate dynamic analysis to support the write barriers. It empties out the nursery with a minor GC to provide a clean slate, then runs the mutator for a while to populate the store buffers. Finally, it scans the entire tenured heap to find all pointers into the nursery, verifying that all such pointers are captured in the store buffer.

Remaining Tasks

While GGC already works in the shell, work remains before it can be used in the full browser. There are many places where a post write barrier is not straightforward, and we are manually fixing those. Fuzz tests are still finding bugs, so those need to be tracked down. Finally, we won’t turn this stuff on until it’s a net performance win, and we have just started to work through the various performance faults. The early results look promising.


Much of the foundational work was done by Brian Hackett, with Bill McCloskey helping with the integration with the existing GC code. The heavy lifting for the exact stack rooting was done by Terrence Cole and Jon Coppeard, assisted by Steve Fink and Nicholas Nethercote and a large group of JS and Gecko developers. Boris Zbarsky and David Zbarsky in particular did the bulk of the Gecko rooting work, with Tom Schuster and man of mystery Ms2ger pitching in for the SpiderMonkey and XPConnect rooting. Terrence did the core work on the generational collector, with Jon implementing much of the Gecko write barriers. Brian implemented the static rooting analysis, with Steve automating it and exposing its results. A special thanks to the official project masseuse, who… wait a minute. We don’t have a project masseuse! We’ve been cheated!


  1. From the standpoint of the garbage collector, the only purpose of the main program (you know, the one that does all the work) is to ask for new stuff to be allocated in the heap and to modify the pointers between those heap objects. As a result, it is referred to as the “mutator” in GC literature. (That said, the goal of a garbage collector is to pause the mutator as briefly and as infrequently as possible.)
  2. Brian Hackett implemented the static analysis based on his sixgill framework. It operates as a GCC plugin that monitors all compilation and sends off the control flow graph to a server that records everything in a set of databases, from which a series of JavaScript programs pull out the data and analyze it for rooting hazards.
  3. The bot is called “mrgiggles”, for no good reason, and currently only hangs out in #jsapi. /msg mrgiggles help for usage instructions.
  4. “We” in this case mostly means Terrence Cole, with guidance from Bill McCloskey.
  5. Referred to in the literature by the grim phrase “high infant mortality.” The idea that the bulk of objects in most common workloads die quickly is called the “generational hypothesis”.

The Baseline Compiler Has Landed

This Wednesday we landed the baseline compiler on Firefox Nightly. After six months of work from start to finish, we are finally able to merge the fruits of our toils into the main release stream.

What Is The Baseline Compiler?

Baseline (no, there is no *Monkey codename for this one) is IonMonkey’s new warm-up compiler. It brings performance improvements in the short term, and opportunities for new performance improvements in the long term. It opens the door for discarding JaegerMonkey, which will enable us to make other changes that greatly reduce the memory usage of SpiderMonkey. It makes it easier and faster to implement first-tier optimizations for new language features, and to more easily enhance those into higher-tier optimizations in IonMonkey.

Our scores on the Kraken, Sunspider, and Octane benchmarks have improved by 5-10% on landing, and will continue to improve as we continue to leverage Baseline to make SpiderMonkey better. See the AreWeFastYet website. You can select regions on the graph (by clicking and dragging) to zoom in on them.

Another JIT? Why Another JIT?

Until now, Firefox has used two JITs: JaegerMonkey and IonMonkey. Jaeger is a general purpose JIT that is “pretty fast”, and Ion is a powerful optimizing JIT that’s “really fast”. Initially, hot code gets compiled with Jaeger, and then if it gets really hot, recompiled with Ion. This strategy lets us gradually optimize code, so that the really heavyweight compilation is used for the really hot code. However, the success of this strategy depends on striking a good balance between the time spent compiling at different tiers, and the actual performance improvements delivered at each tier.

To make a long story short, we’re currently using JaegerMonkey as a stopgap baseline compiler for IonMonkey, and it was not designed for that job. Ion needs a baseline compiler designed with Ion in mind, and that’s what Baseline is.

The fuller explanation, as always, is more nuanced. I’ll go over that in three sections: the way it works in the current release, why that’s a problem, and how Baseline helps fix it.

The Current Reality

In a nutshell, here’s how current release Firefox approaches JIT compilation:

  1. All JavaScript functions start out executing in the interpreter. The interpreter is really slow, but it collects type information for use by the JITs.
  2. When a function gets somewhat hot, it gets compiled with JaegerMonkey. Jaeger uses the collected type information to optimize the generated jitcode.
  3. The function executes using the Jaeger jitcode. When it gets really hot, it is re-compiled with IonMonkey. IonMonkey’s compiler spends a lot more time than JaegerMonkey’s, generating really optimized jitcode.
  4. If type information for a function changes, then any existing JITcode (both Jaeger’s and Ion’s) is thrown away, the function returns to being interpreted, and we go through the whole JIT lifecycle again.

There are good reasons why SpiderMonkey’s JIT compilation strategy is structured this way.

You see, Ion takes a really long time to compile, because to generate extremely optimized jitcode, it applies lots of heavyweight optimization techniques. This means that if we Ion-compile functions too early, type information is more likely to change after compilation, and Ion code gets invalidated a lot. This causes the engine to waste a whole lot of time on compiles that will just be discarded. However, if we wait too long to compile, then we spend way too much time interpreting a function before compiling it.

JaegerMonkey’s JIT compiler is not nearly as time-consuming as IonMonkey’s. Jaeger uses collected type information to optimize code generation, but it doesn’t spend nearly as much time as Ion optimizing its generated code. It generates “pretty good” jitcode, but does so way faster than Ion.

So Jaeger was stuck in between the interpreter and Ion, and performance improved because the really hot code would still get Ion-compiled and be really fast, and the somewhat-hot code would get compiled with Jaeger (and recompiled often as type-info changed, but that was OK because Jaeger was faster at compiling).

This approach ensured that SpiderMonkey spent as little time as possible in the interpreter, where performance goes to die, while still gaining the benefits of Ion’s code generation for really hot JavaScript code. So all is well, right?

No. No it is not.

The Problems

The above approach, while a great initial compromise, still posed several significant issues:

  1. Neither JaegerMonkey nor IonMonkey could collect type information; both generated jitcode that relied on it. That jitcode would run for as long as the type information associated with it was stable. If it changed, the jitcode would be invalidated, and execution would go back to the interpreter to collect more type information.
  2. Jaeger and Ion’s calling conventions were different. Jaeger used the heap-allocated interpreter stack directly, whereas Ion used the (much faster) native C stack. This made calls between Jaeger and Ion code very expensive.
  3. The type information collected by the interpreter was limited in certain ways. The existing Type-Inference (TI) system captured some kinds of type information very well (e.g. the types of values you could expect to see from a property read at a given location in the code), but other kinds of information very poorly (e.g. the shapes of the objects that the property was being retrieved from). This limited the kinds of optimizations Ion could do.
  4. The TI infrastructure required (and still requires) a lot of extra memory to persistently track type analysis information. Brian Hackett, who originally designed and implemented TI, figured he could greatly reduce that memory overhead for Ion, but it would be much more time consuming to do for Jaeger.
  5. A lot of web-code doesn’t run hot enough for even the Jaeger compilation phase to kick in. Jaeger took less time than Ion to compile, but it was still expensive, and the resulting code could always be invalidated by type information changes. Because of this, the threshold for Jaeger compilation was still set pretty high, and a lot of non-hot code still ran in the interpreter. For example, SpiderMonkey lagged on the SunSpider benchmark largely because of this issue.
  6. Jaeger is just really complex and hard to work with.

The Solution

The Baseline compiler was designed to address these shortcomings. Like the interpreter, Baseline jitcode feeds information to the existing TI engine, while additionally collecting even more information by using inline cache (IC) chains. The IC chains that Baseline jitcode creates as it runs can be inspected by Ion and used to better optimize Ion jitcode. Baseline jitcode never becomes invalid, and never requires recompilation. It tracks and reacts to dynamic changes, adding new stubs to its IC chains as necessary. Baseline’s native compilation and optimized IC stubs also allow it to run 10x-100x faster than the interpreter. Baseline also follows Ion’s calling conventions, and uses the C stack instead of the interpreter stack. Finally, the design of the baseline compiler is much simpler than either JaegerMonkey or IonMonkey, and it shares a lot of common code with IonMonkey (e.g. the assembler, jitcode containers, linkers, trampolines, etc.). It’s also really easy to extend Baseline to collect new type information, or to optimize for new cases.

In effect, Baseline offers a better compromise between the interpreter and a JIT. Like the interpreter, it’s stable and resilient in the face of dynamic code, collects type information to feed to higher-tier JITs, and is easy to update to handle new features. But as a JIT, it optimizes for common cases, offering an order of magnitude speed up over the interpreter.
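To make the IC mechanism concrete, here is a toy sketch — invented shapes and values, not Baseline’s actual data structures — of a stub chain for a property access site:

```cpp
#include <vector>

// Sketch with invented shapes (not Baseline's actual data structures): an IC
// chain is a list of stubs, each guarding on a case this site has seen. A
// miss falls through to the fallback, which handles the new case and attaches
// a stub for it, so the chain is only ever extended, never thrown away.
struct ICStub {
    int guardShape;   // the object "shape" this stub is specialized for
    int result;       // stand-in for the work the stub's jitcode would do
};

struct ICChain {
    std::vector<ICStub> stubs;
    int fallbackCalls = 0;

    int getProp(int shape) {
        for (const ICStub& s : stubs)     // walk the stub chain
            if (s.guardShape == shape)
                return s.result;          // guard hit: the fast path
        ++fallbackCalls;                  // miss: take the slow fallback path
        int value = shape * 10;           // stand-in for the real property lookup
        stubs.push_back({shape, value});  // attach a stub for next time
        return value;
    }
};
```

The key property the sketch shows is that a new case never invalidates anything: it only appends a stub, which is why Baseline jitcode never needs to be recompiled.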

Where Do We Go From Here?

There are a handful of significant changes that Baseline will enable; here are some things to watch for in the upcoming year:

  • Significant memory savings by reducing type-inference memory.
  • Performance improvements due to changes in type-inference enabling better optimization of inlined functions.
  • Further integration of IonMonkey and Baseline, leading to better performance for highly polymorphic object-manipulating code.
  • Better optimization of high-level features like getters/setters, proxies, and generators.

Also, to remark on recent happenings… given the recent flurry of news surrounding asm.js and OdinMonkey, there have been concerns raised (by important voices) about high-level JavaScript becoming a lesser citizen of the optimization landscape. I hope that in some small way, this landing and ongoing work will serve as a convincing reminder that the JS team cares and will continue to care about making high-level, highly-dynamic JavaScript as fast as we can.


Baseline was developed by Jan De Mooij and myself, with significant contributions by Tom Schuster and Brian Hackett. Development was greatly helped by our awesome fuzz testers Christian Holler and Gary Kwong.

And of course it must be noted that Baseline by itself would not serve a purpose. The fantastic work done by the IonMonkey team and the rest of the JS team provides the reason for Baseline’s existence.

Support for debugging SpiderMonkey with GDB now landed

The main source tree for Firefox, Mozilla Central, now includes some code that should make debugging SpiderMonkey with GDB on Linux much more pleasant and productive.

GDB understands C++ types and values, but naturally it has no idea what the debuggee intends them to represent. As a result, GDB’s attempts to display those values can sometimes be worse than printing nothing at all. For example, here’s how a stock GDB displays a stack frame for a call to js::baseops::SetPropertyHelper:

(gdb) frame
#0 js::baseops::SetPropertyHelper (cx=0xc648b0, obj={<js::HandleBase> = {}, ptr = 0x7fffffffc960}, receiver={<js::HandleBase> = {}, ptr = 0x7fffffffc960}, id={<js::HandleBase> = {}, ptr = 0x7fffffffc1e0}, defineHow=4, vp={<js::MutableHandleBase> = {<js::MutableValueOperations<JS::MutableHandle >> = {<js::ValueOperations<JS::MutableHandle >> = {}, }, }, ptr = 0x7fffffffc1f0}, strict=0) at /home/jimb/moz/dbg/js/src/jsobj.cpp:3593

There exist people who can pick through that, but for me it’s just a pile of hexadecimal noise. And yet, if you persevere, what those arguments represent is quite simple: in this case, obj and receiver are both the JavaScript global object; id is the identifier "x"; and vp refers to the JavaScript string value "foo". This SetPropertyHelper call is simply storing "foo" as the value of the global variable x. But it sure is hard to tell—and that’s an annoyance for SpiderMonkey developers.

As of early December, Mozilla Central includes Python scripts for GDB that define custom printers for SpiderMonkey types, so that when GDB comes across a SpiderMonkey type like MutableValueHandle, it can print it in a meaningful way. With these changes, GDB displays the stack frame shown above like this:

(gdb) frame
#0 js::baseops::SetPropertyHelper (cx=0xc648b0, obj=(JSObject * const) 0x7ffff151f060 [object global] delegate, receiver=(JSObject * const) 0x7ffff151f060 [object global] delegate, id=$jsid("x"), defineHow=4, vp=$jsval("foo"), strict=0) at /home/jimb/moz/dbg/js/src/jsobj.cpp:3593

Here it’s much easier to see what’s going on. Objects print with their class, like “global” or “Object”; strings print as strings; jsval values print as appropriate for their tags; and so on. (The line breaks could still be improved, but that’s GDB for you.)

Naturally, the pretty-printers work with any command in GDB that displays values: print, backtrace, display, and so on. Each type requires custom Python code to decode it; at present we have pretty-printers for JSObject, JSString, jsval, the various Rooted and Handle types, and things derived from those. The list will grow.

GDB picks up the SpiderMonkey support scripts automatically when you’re debugging the JavaScript shell, as directed by the file that the build system places in the same directory as the js executable. We haven’t yet made the support scripts load automatically when debugging Firefox, as that’s a much larger audience of developers, and we’d like a chance to shake out bugs before foisting it on everyone.

Some versions of GDB are patched to trust only auto-load files found in directories you’ve whitelisted; if this is the case for you, GDB will complain, and you’ll need to add a command like the following to your ~/.gdbinit file:

# Tell GDB to trust auto-load files found under ~/moz.
add-auto-load-safe-path ~/moz

If you need to see a value in its plain C++ form, with no pretty-printing applied, you can add the /r format modifier to the print command:

(gdb) print vp
$1 = $jsval("foo")
(gdb) print/r vp
$2 = {
  <js::MutableHandleBase> = {
    <js::MutableValueOperations<JS::MutableHandle >> = {
      <js::ValueOperations<JS::MutableHandle >> = {}, }, },
  members of JS::MutableHandle:
  ptr = 0x7fffffffc1f0
}

If you run into troubles with a pretty-printer, please file a bug in Bugzilla, under the “Core” product, for the “JavaScript Engine” component. (The pretty-printers have their own unit and regression tests; we do want them to be reliable.) In the mean time, you can disable the pretty-printers with the disable pretty-printer command:

(gdb) disable pretty-printer .* SpiderMonkey
12 printers disabled
128 of 140 printers enabled

For more details, see the GDB support directory’s README file.

AreWeFastYet Improvements

I’m pleased to announce that we have rebooted AreWeFastYet with a whole
new set of features! The big ones:

  • The graphs now display a hybrid view that contains full history, as well as recent checkins.
  • You can select areas of the graph to zoom in and get a detailed view.
  • Tooltips can be pinned and moved around to make comparing easier.
  • Tooltips now have more information, like revision ranges and changelogs.
  • The site is now much faster as it is almost entirely client-side.

You can check it out at

About AreWeFastYet

AreWeFastYet is the JavaScript Team’s tool for automatically monitoring
JavaScript performance. It helps us spot regressions, and it provides a
clear view of where to drive benchmark performance work.

It began as a demotivational joke during Firefox 4 development. It
originally just said “No”. Once it got graphs, though, it became a
strong motivator. The goal was very clear: we had to make the lines
cross, and with each performance checkin we could see ourselves edge closer.

After the Firefox 4 release AWFY took on an additional role. Since it
ran every 30 minutes, we could easily spot performance regressions.
However, this had the unexpected and unfortunate side effect of taking
away the long-term view of JavaScript performance.

The new version of AWFY is designed to address that problem, as well as
fix numerous usability issues.


Previously AWFY was written in PHP and used expensive server-side
database queries. Now the website is entirely client-side,
self-contained in static HTML, JavaScript, and JSON. Since there is a
large amount of data, the JSON is divided into small files based on the
granularity of information needed. As you zoom in, new data sets at the
required detail are fetched asynchronously.

The JSON is updated every 15 minutes, since it takes about that long to
re-process all the old data.

The new source code is all available on GitHub:

I would like to give a HUGE thank you to John Schoenick, who made AWSY. Many of AWFY’s problems had already been solved in AWSY, and John was extremely helpful in explaining those solutions as well as helping with the actual HTML and CSS.

The Ins and Outs of Invalidation

One of the primary goals of JIT engines for dynamic languages is to make high-level code run fast by compiling it into efficient machine code.  A key requirement, however, is that the compiled code for any piece of JavaScript must behave exactly the same at runtime as the interpreted code for that JavaScript would have.  In a nutshell, providing efficiency while maintaining correctness is the fundamental problem faced by JIT compilers.

This efficiency is tricky to achieve because dynamic languages like JavaScript are highly polymorphic: any variable can hold values of any type, and in the general case, the runtime cannot know ahead of time what type a particular value will have. For example, even if the program will always add two integers together at a particular place in the code, the engine has to allow for the possibility that a non-integer value might unexpectedly show up.

There are two broad techniques which can be used to deal with this challenge.  The first is guarding, where the engine generates machine code that checks its assumptions before executing the code that relies on those assumptions.  For the adding example above, the engine will generate machine code to do integer addition, but it will make sure to prefix that code with checks which ensure that the two values being added are actually integers.  If the checks fail, the machine code jumps to a slowpath that handles the uncommon case in a general way.  If the check succeeds, the fastpath (or optimized) machine code is executed.  The expectation is that the checks will succeed most of the time, and so the fastpath will be executed most of the time.
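As a rough illustration, here is the guarding idea transplanted into plain JavaScript. This is purely a sketch: in a real engine the guard, the fastpath, and the slowpath are all generated machine code, and `addGuarded` is a hypothetical name.

```javascript
// Sketch of guard-based compilation for the integer-add example.
// The guard checks the assumption; only if it holds do we take the
// specialized fastpath.
function addGuarded(a, b) {
    // Guard: both operands must actually be Int32 values.
    if (typeof a === "number" && (a | 0) === a &&
        typeof b === "number" && (b | 0) === b) {
        return (a + b) | 0;   // fastpath: raw integer addition
    }
    return a + b;             // slowpath: fully generic addition
}

addGuarded(2, 3);     // guard succeeds, fastpath runs
addGuarded("2", 3);   // guard fails, slowpath handles it
```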

While guarding is a useful technique, it imposes a performance penalty.  Checking assumptions takes time, and having those checks in hot code can be a significant performance drain.  The second technique for emitting efficient code while retaining correctness is invalidation.  That’s what I’m going to talk about in this post.  I’m going to use SpiderMonkey’s newly added JIT engine, IonMonkey, to illustrate my examples, but aside from the details the overall strategy will apply to most JIT engines.

Invalidation at 10,000 Feet

The key idea behind invalidation is this: instead of making our jitcode check an assumption every time we are about to do something that relies on that assumption, we ask the engine to mark the jitcode as invalid when the assumption becomes false.  As long as we take care never to run jitcode that’s been marked invalid, then we can go ahead and leave out the guards we would have otherwise added.
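The rule "never run jitcode that's been marked invalid" can be sketched in a few lines of JavaScript. All the names here are hypothetical stand-ins; the real machinery lives in the engine's C++ internals.

```javascript
// Toy model: a script has an always-correct interpreted path, plus
// optional jitcode that relies on an assumption (here: Int32 inputs).
const script = {
    interpret: (a, b) => a + b,                // always correct
    jitcode: { run: (a, b) => (a + b) | 0,     // assumes Int32 inputs
               valid: true }
};

function callScript(s, ...args) {
    if (s.jitcode && s.jitcode.valid)
        return s.jitcode.run(...args);         // optimized, guard-free
    return s.interpret(...args);               // interpreter fallback
}

callScript(script, 1, 2);       // runs the jitcode
script.jitcode.valid = false;   // an assumption broke: mark invalid
callScript(script, 1, 2);       // now falls back to the interpreter
```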

Let’s take a look at a simple example to get started.  Here’s a small JavaScript program that computes the sum of some point distances:

function Point(x, y) {
    this.x = x;
    this.y = y;
}
function dist(pt1, pt2) {
    var xd = pt1.x - pt2.x,
        yd = pt1.y - pt2.y;
    return Math.sqrt(xd*xd + yd*yd);
}
function main() {
    var totalDist = 0;
    var origin = new Point(0, 0);
    for (var i = 0; i < 20000; i++) {
        totalDist += dist(new Point(i, i), origin);
    }
    // The following "eval" is just there to prevent IonMonkey from
    // compiling main().  Functions containing calls to eval don't
    // currently get compiled by Ion.
    eval("");
    return totalDist;
}
main();

When this script is run, the dist function is compiled by IonMonkey after roughly 10,000 iterations, generating the following intermediate representation (IR) code:

[1]   Parameter 0          => Value     // (pt1 value)
[2]   Parameter 1          => Value     // (pt2 value)
[3]   Unbox [1]            => Object    // (unbox pt1 to object) 
[4]   Unbox [2]            => Object    // (unbox pt2 to object)
[5]   LoadFixedSlot(x) [3] => Int32     // (read pt1.x, unbox to Int32)
[6]   LoadFixedSlot(x) [4] => Int32     // (read pt2.x, unbox to Int32)
[7]   Sub [5] [6]          => Int32     // xd = (pt1.x - pt2.x)
[8]   LoadFixedSlot(y) [3] => Int32     // (read pt1.y, unbox to Int32)
[9]   LoadFixedSlot(y) [4] => Int32     // (read pt2.y, unbox to Int32)
[10]  Sub [8] [9]          => Int32     // yd = (pt1.y - pt2.y)
[11]  Mul [7] [7]          => Int32     // (xd*xd)
[12]  Mul [10] [10]        => Int32     // (yd*yd)
[13]  Add [11] [12]        => Int32     // ((xd*xd) + (yd*yd))
[14]  Sqrt [13]            => Double    // Math.sqrt(...)
[15]  Return [14]

The above is a cleaned-up version of the actual IR, simplified for illustration; the real low-level IR is considerably noisier.

In instructions [3] and [4], Ion blindly unboxes the argument values into Object pointers without checking to see if they are actually object pointers and not primitives. In instructions [5], [6], [8], and [9], Ion blindly converts the x and y values loaded from the objects to Int32s.

(Note: Unboxing in this case refers to decoding a generic JavaScript value into a raw value of a specific type such as Int32, Double, Object pointer, or Boolean).

If we were only using guards to check our assumptions, the following extra checks would have gone into this code:

  1. Two type checks: An extra check before each of [3] and [4] to check that the input argument was actually an object pointer before unboxing it.
  2. Four type checks: Ensuring that pt1.x, pt2.x, pt1.y, and pt2.y were all Int32 values before unboxing them.
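Spelled out as JavaScript, those six omitted guards would look roughly like the following. This is purely illustrative: the real checks would be emitted as machine code, and `distWithGuards`, `genericDist`, and `isInt32` are hypothetical names.

```javascript
// Helper matching the Int32 assumption the jitcode makes.
function isInt32(v) {
    return typeof v === "number" && (v | 0) === v;
}

// The general-purpose slowpath: handles any inputs correctly.
function genericDist(pt1, pt2) {
    var xd = pt1.x - pt2.x, yd = pt1.y - pt2.y;
    return Math.sqrt(xd*xd + yd*yd);
}

function distWithGuards(pt1, pt2) {
    // Two type checks on the arguments before unboxing them:
    if (typeof pt1 !== "object" || pt1 === null ||
        typeof pt2 !== "object" || pt2 === null)
        return genericDist(pt1, pt2);
    // Four type checks on the loaded x and y values:
    if (!isInt32(pt1.x) || !isInt32(pt2.x) ||
        !isInt32(pt1.y) || !isInt32(pt2.y))
        return genericDist(pt1, pt2);
    // Fastpath: safe to use raw integer arithmetic here.
    var xd = pt1.x - pt2.x, yd = pt1.y - pt2.y;
    return Math.sqrt(xd*xd + yd*yd);
}
```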

Instead, the IR code skips these six extra checks, generating extremely tight code. Normally, we wouldn’t be able to do this because the jitcode would simply be wrong if any of these assumptions did not hold. If the function dist were ever called with non-object arguments, the jitcode would crash. If dist were ever called with instances of Point whose x and y values were not Int32s, the jitcode would crash.

To get away with generating this efficient code while maintaining correctness, we must make sure that we never run it after its assumptions become invalid. Accomplishing that requires leveraging SpiderMonkey’s type inference system.

Type Inference

SpiderMonkey’s type inference (TI) dynamically tracks known type information for JavaScript objects and scripts, and how those types flow through different parts of the code. The data structures maintained and updated by TI represent the current type knowledge about the program. By tracking when these structures change, the engine can trigger events whenever particular type assumptions are broken.

The following is a very simplified diagram of the type model generated for our program as it runs.

The TypeScript at the top keeps track of the type information for the function dist – such as the type sets associated with the first and second arguments (‘pt1’ and ‘pt2’).

The TypeObject diagrammed below keeps track of the type information associated with instances of Point – such as the type sets associated with the fields ‘x’ and ‘y’.

These structures will be updated by TI as needed. For example, if dist is ever called with an object that is not an instance of Point, TI will update the appropriate argument typeset on the TypeScript. Likewise, if Point is ever instantiated with values for ‘x’ or ‘y’ that are not integers, the parameter type sets on the TypeObject will be updated.

When Ion JIT-compiles a function and makes implicit assumptions about types, it adds invalidation hooks to the appropriate type sets, like so:

Whenever new types are added to type sets, the relevant invalidation hook will be triggered, which will cause the jitcode to be marked invalid.
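A toy version of that mechanism can be sketched in JavaScript: a type set records the types it has observed, and runs any registered invalidation hooks when a genuinely new type shows up. The names here are hypothetical; TI’s real data structures are C++ internals.

```javascript
// Minimal type-set sketch: observed types plus invalidation hooks.
function TypeSet() {
    this.types = new Set();
    this.hooks = [];
}
TypeSet.prototype.addHook = function (hook) {
    this.hooks.push(hook);      // registered at JIT-compile time
};
TypeSet.prototype.observe = function (type) {
    if (!this.types.has(type)) {
        this.types.add(type);
        // A new type broke the old assumption: fire the hooks,
        // which would mark the dependent jitcode invalid.
        this.hooks.forEach(hook => hook(type));
    }
};

const xTypes = new TypeSet();
xTypes.observe("Int32");        // initial observation; nothing compiled yet
let invalidated = false;
xTypes.addHook(() => { invalidated = true; });
xTypes.observe("Int32");        // same type: no hook fires
xTypes.observe("Double");       // new type: jitcode gets invalidated
```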

Experiments With Ion

Let’s see if we can experimentally trigger these kinds of invalidations by changing the code above. I’ll be using the standalone JavaScript shell built from the Mozilla sources to run these examples. You can verify these tests yourself by building a debug version of the JS shell. (See the SpiderMonkey build documentation for more information).

First, let’s start easy – and verify that no invalidations occur with the script above:

$ IONFLAGS=osi js-debug points.js
[Invalidate] Start invalidation.
[Invalidate]  No IonScript invalidation.

(ignore the spurious, extra “Start invalidation” in this and subsequent outputs – it’s due to garbage-collection starting, which makes the engine check for potential invalidations caused by the GC. It’s not relevant to this post)

Passing the environment variable IONFLAGS=osi just asks Ion to dump all invalidation related events to its output as it runs. As the output notes, this program causes no invalidations – since no type assumptions are broken after compilation.

Stab 2

For our second try, let’s break the assumption that all arguments to dist are instances of Point, and see what happens. Here is the modified main() function:

function main() {
    var totalDist = 0;
    var origin = new Point(0, 0);
    for (var i = 0; i < 20000; i++) {
        totalDist += dist(new Point(i, i), origin);
    }
    dist({x:3,y:9}, origin); /** NEW! **/
    // The following "eval" is just there to prevent IonMonkey from
    // compiling main().  Functions containing calls to eval don't
    // currently get compiled by Ion.
    eval("");
    return totalDist;
}

In this program, the loop is going to run and make dist hot enough that it gets compiled with the assumption that its arguments are always going to be instances of Point. Once the loop is completed, we call dist again, but with its first argument being an object which is not an instance of Point, which breaks the assumption made by the jitcode.

$ IONFLAGS=osi js-debug points.js
[Invalidate] Start invalidation.
[Invalidate]  No IonScript invalidation.
[Invalidate] Start invalidation.
[Invalidate]  Invalidate points.js:6, IonScript 0x1f7e430

(once again, the first “Start invalidation” is just caused by GC kicking in, and it’s not relevant to this post.)

Aha! We’ve managed to trigger invalidation of dist (points.js, line 6). Life is good.
The diagram below shows what’s happening. The dist TypeScript’s first-argument-typeset changes, adding a reference to a new TypeObject. This triggers its invalidation hook, and marks the IonScript invalid:

Stab 3

For our third stab, we break the assumption that fields of Point instances always contain Int32 values. Here is the modified main() function:

function main() {
    var totalDist = 0;
    var origin = new Point(0, 0);
    for (var i = 0; i < 20000; i++) {
        totalDist += dist(new Point(i, i), origin);
    }
    dist(new Point(1.1, 5), origin); /** NEW! **/
    // The following "eval" is just there to prevent IonMonkey from
    // compiling main().  Functions containing calls to eval don't
    // currently get compiled by Ion.
    eval("");
    return totalDist;
}

And the associated output:

$ IONFLAGS=osi js-debug points.js
[Invalidate] Start invalidation.
[Invalidate]  No IonScript invalidation.
[Invalidate] Start invalidation.
[Invalidate]  Invalidate points.js:6, IonScript 0x1f7e430

And the diagram showing how the script is invalidated:

Invalidation vs. Guarding

Invalidation and guarding are two distinct approaches to generating jitcode for any given operation. Within the jitcode for any given function, both techniques may be used, and neither one is strictly better than the other in all cases.

The advantage of invalidation-based optimization is that guards can be omitted, allowing very hot code to execute extremely quickly. However, there are also downsides. When an assumption fails, every piece of jitcode that relies on that assumption must be marked invalid, and prevented from being executed. This means that the function returns to running via the interpreter, until such a time as we compile it again with a new set of assumptions. If we have the bad luck of choosing invalidation-based optimization on an assumption that happens to break, then we’re suddenly stuck with a very high cost.

With guard-based optimization, we sacrifice some speed in the optimized code, but the resulting code is more resilient to changes in assumptions – we don’t have to throw away jitcode when assumptions break. In those cases, the code will just execute a slowpath. If the number of times a function is called with invalid assumptions is small (but greater than zero), then guard-based optimization might be a better choice than invalidation-based optimization.

The question of which optimization strategy to pick when is not always easily answered. Even if an assumption is invalidated occasionally, the jitcode which assumes it may be hot enough to make it worthwhile to use invalidation-based optimization, and just pay the cost of the occasional recompile – the time saved in between will be worth the recompile cost. Alternatively, it may not be worth the cost. Beyond a certain point, deciding between the two approaches becomes a matter of heuristics and tuning.
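The trade-off can be made concrete with a back-of-the-envelope cost model. All the numbers below are made up for illustration; real engines tune these heuristics empirically.

```javascript
// Total cost of running a hot operation under each strategy.
// guardCost: per-call overhead of checking assumptions.
// recompileCost: one-time cost when an assumption breaks and the
// function must be invalidated and recompiled.
function guardedCost(calls, guardCost, baseCost) {
    return calls * (baseCost + guardCost);
}
function invalidationCost(calls, baseCost, breaks, recompileCost) {
    return calls * baseCost + breaks * recompileCost;
}

// One million hot calls; guards add 1 unit per call; the assumption
// breaks twice, each recompile costing 50,000 units:
guardedCost(1e6, 1, 10);               // 11,000,000 units
invalidationCost(1e6, 10, 2, 50000);   // 10,100,000 units -> invalidation wins
```

With rare breaks and very hot code, invalidation comes out ahead; flip the numbers (many breaks, cheap guards) and guarding wins.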

On-Stack Invalidation

The post so far gives a general idea of the main principles behind invalidations. That said, there is a very important corner case with invalidation that all JIT engines need to take care of.

The corner case is called on-stack invalidation, and it arises because sometimes, the jitcode that becomes invalid is code that’s on the call stack (i.e. it’s one of the callers of the code that’s currently executing). This means that once the current code finishes running, it’s going to execute a machine-level return to jitcode that we can no longer safely execute.

Handling this case requires a bit of juggling, and consideration for a number of subtle errors that may arise. I’ll cover more on that in a follow-up post.

IonMonkey in Firefox 18

Today we enabled IonMonkey, our newest JavaScript JIT, in Firefox 18. IonMonkey is a huge step forward for our JavaScript performance and our compiler architecture. But also, it’s been a highly focused, year-long project on behalf of the IonMonkey team, and we’re super excited to see it land.

SpiderMonkey has a storied history of just-in-time compilers. Throughout all of them, however, we’ve been missing a key component you’d find in typical production compilers, like for Java or C++. The old TraceMonkey*, and newer JägerMonkey, both had a fairly direct translation from JavaScript to machine code. There was no middle step. There was no way for the compilers to take a step back, look at the translation results, and optimize them further.

IonMonkey provides a brand new architecture that allows us to do just that. It essentially has three steps:

  1. Translate JavaScript to an intermediate representation (IR).
  2. Run various algorithms to optimize the IR.
  3. Translate the final IR to machine code.

We’re excited about this not just for performance and maintainability, but also for making future JavaScript compiler research much easier. It’s now possible to write an optimization algorithm, plug it into the pipeline, and see what it does.


With that said, what exactly does IonMonkey do to our current benchmark scores? IonMonkey is targeted at long-running applications (we fall back to JägerMonkey for very short ones). I ran the Kraken and Google V8 benchmarks on my desktop (a Mac Pro running Windows 7 Professional). On the Kraken benchmark, Firefox 17 runs in 2602ms, whereas Firefox 18 runs in 1921ms, making for roughly a 26% performance improvement. For the graph, I converted these times to runs per minute, so higher is better:
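The conversion used for the graph is straightforward: a Kraken run time in milliseconds becomes runs per minute.

```javascript
// Convert a benchmark run time (ms) into runs per minute.
const runsPerMinute = ms => 60000 / ms;

runsPerMinute(2602).toFixed(1);   // Firefox 17: "23.1" runs/min
runsPerMinute(1921).toFixed(1);   // Firefox 18: "31.2" runs/min

// The reported speedup: (2602 - 1921) / 2602 ≈ 0.26, i.e. ~26%.
```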

On Google’s V8 benchmark, Firefox 15 gets a score of 8474, and Firefox 17 gets a score of 9511. Firefox 18, however, gets a score of 10188, making it 7% faster than Firefox 17, and 20% faster than Firefox 15.

We still have a long way to go: over the next few months, now with our fancy new architecture in place, we’ll continue to hammer on major benchmarks and real-world applications.

The Team

For us, one of the coolest aspects of IonMonkey is that it was a highly-coordinated team effort. Around June of 2011, we created a somewhat detailed project plan and estimated it would take about a year. We started off with four interns – Andrew Drake, Ryan Pearl, Andy Scheff, and Hannes Verschore – each implementing critical components of the IonMonkey infrastructure, all pieces that still exist in the final codebase.

In late August 2011 we started building out our full-time team, which now includes Jan de Mooij, Nicolas Pierron, Marty Rosenberg, Sean Stangl, Kannan Vijayan, and myself. (I’d also be remiss not mentioning SpiderMonkey alumnus Chris Leary, as well as 2012 summer intern Eric Faust.) For the past year, the team has focused on driving IonMonkey forward, building out the architecture, making sure its design and code quality is the best we can make it, all while improving JavaScript performance.

It’s really rewarding when everyone has the same goals, working together to make the project a success. I’m truly thankful to everyone who has played a part.


Over the next few weeks, we’ll be blogging about the major IonMonkey components and how they work. In brief, I’d like to highlight the optimization techniques currently present in IonMonkey:

  • Loop-Invariant Code Motion (LICM), or moving instructions outside of loops when possible.
  • Sparse Global Value Numbering (GVN), a powerful form of redundant code elimination.
  • Linear Scan Register Allocation (LSRA), the register allocation scheme used in the HotSpot JVM (and until recently, LLVM).
  • Dead Code Elimination (DCE), removing unused instructions.
  • Range Analysis, eliminating bounds checks (will be enabled after bug 765119).

Of particular note, I’d like to mention that IonMonkey works on all of our Tier-1 platforms right off the bat. The compiler architecture is abstracted to require minimal replication of code generation across different CPUs. That means the vast majority of the compiler is shared between x86, x86-64, and ARM (the CPU used on most phones and tablets). For the most part, only the core assembler interface must be different. Since all CPUs have different instruction sets – ARM being totally different than x86 – we’re particularly proud of this achievement.

Where and When?

IonMonkey is enabled by default for desktop Firefox 18, which is currently Firefox Nightly. It will be enabled soon for mobile Firefox as well. Firefox 18 becomes Aurora on Oct 8th, and Beta on November 20th.

* Note: TraceMonkey did have an intermediate layer. It was unfortunately very limited. Optimizations had to be performed immediately and the data structure couldn’t handle after-the-fact optimizations.

Incremental GC in Firefox 16!

Firefox 16 will be the first version to support incremental garbage collection. This is a major feature, over a year in the making, that makes Firefox smoother and less laggy. With incremental GC, Firefox responds more quickly to mouse clicks and key presses. Animations and games will also draw more smoothly.

The basic purpose of the garbage collector is to collect memory that JavaScript programs are no longer using. The space that is reclaimed can then be reused for new JavaScript objects. Garbage collections usually happen every five seconds or so. Prior to incremental GC landing, Firefox was unable to do anything else during a collection: it couldn’t respond to mouse clicks or draw animations or run JavaScript code. Most collections were quick, but some took hundreds of milliseconds. This downtime can cause a jerky, frustrating user experience. (On Macs, it causes the dreaded spinning beachball.)

Incremental garbage collection fixes the problem by dividing the work of a GC into smaller pieces. Rather than do a 500 millisecond garbage collection, an incremental collector might divide the work into fifty slices, each taking 10ms to complete. In between the slices, Firefox is free to respond to mouse clicks and draw animations.
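The slicing arithmetic can be sketched in a few lines of JavaScript. This is only an illustration of the scheduling idea, not SpiderMonkey's actual collector, and `sliceWork` is a hypothetical name.

```javascript
// Split one long chunk of GC work into fixed-budget slices.
function sliceWork(totalWorkMs, sliceBudgetMs) {
    const slices = [];
    let remaining = totalWorkMs;
    while (remaining > 0) {
        const slice = Math.min(sliceBudgetMs, remaining);
        slices.push(slice);
        remaining -= slice;
        // ...between slices, the real browser responds to input
        // and draws animation frames...
    }
    return slices;
}

sliceWork(500, 10).length;   // the 500ms collection becomes 50 slices
```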

I’ve created a demo to show the difference made by incremental GC. If you’re running a Firefox 16 beta, you can try it out here. (If you don’t have Firefox 16, the demo will still work, although it won’t perform as well.) The demo shows GC performance as an animated chart. To make clear the difference between incremental and non-incremental GC, I’ll show two screenshots from the demo. The first one was taken with incremental GC disabled. Later I’ll show a chart with incremental collections enabled. Here is the non-incremental chart:

Time is on the horizontal axis; the red dot moves to the right and shows the current time. The vertical axis, drawn with a log scale, shows the time it takes to draw each frame of the demo. This number is the inverse of frame rate. Ideally, we would like to draw the animation at 60 frames per second, so the time between frames should be 1000ms / 60 = 16.667ms. However, if the browser needs to do a garbage collection or some other task, then there will be a longer pause between frames.

The two big bumps in the graph are where non-incremental garbage collections occurred. The number in red shows the time of the worst bump, in this case 260ms. This means that the browser was frozen for a quarter second, which is very noticeable. (Note: garbage collections often don’t take this long. This demo allocates a lot of memory, which makes collections take longer to demonstrate the benefits of incremental GC.)

To generate the chart above, I disabled incremental GC by visiting about:config in the URL bar and setting the javascript.options.mem.gc_incremental preference to false. (Don’t forget to turn it on again if you try this yourself!) If I enable incremental GC, the chart looks like this:

This chart also shows two collections. However, the longest pause here is only 67ms. This pause is small enough that it is unlikely to be discernible. Notice, though, that the collections here are more spread out. In the top image, the 260ms pause is about 30 pixels wide. In the bottom image, the GCs are about 60 pixels wide. That’s because the incremental collections in the bottom chart are split into slices; in between the slices, Firefox is drawing frames and responding to input. So the total duration of the garbage collection is about twice as long. But it is much less likely that anyone will be affected by these shorter collection pauses.

At this point, we’re still working heavily on incremental collection. There are still some phases of collection that have not been incrementalized. Most of the time, these phases don’t take very long. But users with many tabs open may still see unacceptable pauses. Firefox 17 and 18 will have additional improvements that will decrease pause times even more.

If you want to explore further, you can install MemChaser, an addon for Firefox that shows garbage collection pauses as they happen. For each collection, the worst pause is displayed in the addon bar at the bottom of the window. It’s important to realize that not all pauses in Firefox are caused by garbage collection. You can use MemChaser to correlate the bumps in the chart with garbage collections reported by MemChaser.

If there is a bump when no garbage collection happened, then something else must have caused the pause. The Snappy project is a larger effort aimed at reducing pauses in Firefox. They have developed tools to figure out the sources of pauses (often called “jank”) in the browser. Probably the most important tool is the SPS profiler. If you can reliably reproduce a pause, then you can profile it and figure out what Firefox code was running that made us slow. Then file a bug!

Introducing the official Mozilla JavaScript team blog

Mozilla’s mission is “to promote openness, innovation and opportunity on the web.” Here on the JavaScript engine team we have unique opportunities to support this mission. Our work stretches from technical challenges like Incremental Garbage Collection to working out the details of new language features in

We have many great projects going on that we are excited to share with you. This blog will make it easier for everyone to keep up and stay involved with SpiderMonkey as the team helps build a better web.