Mozilla JS Development Newsletter 1/25/2012-2/29/2012

I waited a couple weeks after the last update to accumulate more cool stuff to talk about, and then I ended up having to wait a few more to get sick, get better, travel, give a talk, and then (partially) dig out from the backlog. Turns out there are a ton of updates by now:

GC

A big thanks and congrats to Bill McCloskey, who recently landed Incremental GC to Nightly! What was once a very noticeable 100-200ms GC pause every 10 seconds or so is now a sequence of hardly noticeable 10ms mini-pauses. Subjectively, it’s a huge improvement on the games and demos that I’ve tried.

There aren’t too many GC benchmarks out there to try to get ‘objective’ with, but there is Google’s Spinning Balls GC pause test. Spinning Balls animates a bunch of circles while allocating lots of memory (by running the v8-splay benchmark code during the animation). The test shows the distribution of pause times, and a dimensionless score that seems to heavily penalize long pauses, even if there are few. On Firefox 10, the pauses are about 70ms, and I get a score of 139. In Nightly, with IGC on, the pauses are mostly 30ms, and I get a score of 674.

You may have noticed that I said IGC makes pauses 10ms, so why do I get 30ms pauses in Spinning Balls? The main reason is that Spinning Balls actually measures the time delta between animation frames, and frames are 16ms apart for 60Hz animation. If a 10ms GC happens to overlap a frame refresh, then we’ll miss one, so the delta will be measured as 33ms. Secondarily, there are some short pauses associated with GC that aren’t incrementalized yet, which also makes us miss about 1 frame every GC or so. Specifically, finalizing objects (e.g., DOM objects) isn’t incremental, which can generate 20ms pauses or so. Also, we currently have to throw away compiled native code on GC, and recompiling can pause us for a few more ms. We plan to fix both of those limitations in followup work.
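The frame-skipping arithmetic above can be sketched out in a few lines. This is an illustrative simulation (not Spinning Balls itself, which measures real animation frames): frames land on ~16.7ms boundaries at 60Hz, and a pause that straddles a boundary makes us skip it.

```javascript
// Illustrative model of why a 10ms pause can show up as a ~33ms frame delta.
const FRAME = 1000 / 60; // ~16.67ms between frames at 60Hz

// Given a GC pause starting at `start` (ms) and lasting `pauseMs`, return the
// observed delta between the frame painted before the pause and the first
// frame painted after it ends.
function observedDelta(start, pauseMs) {
  const lastFrameBefore = Math.floor(start / FRAME) * FRAME;
  // First frame boundary at or after the pause finishes:
  const firstFrameAfter = Math.ceil((start + pauseMs) / FRAME) * FRAME;
  return firstFrameAfter - lastFrameBefore;
}

// A 10ms pause contained entirely within one frame interval: no frame missed.
console.log(observedDelta(2, 10).toFixed(1));  // ~16.7, a normal delta
// The same 10ms pause straddling a frame boundary: one frame missed,
// so the measured delta is ~33ms.
console.log(observedDelta(12, 10).toFixed(1)); // ~33.3
```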

If you’re testing IGC on nightlies, one thing to watch out for is that add-ons (and potentially browser features as well) can disable IGC. You can check whether it’s enabled by going to about:support, scrolling to the bottom, and looking for “Incremental GC”. If it says “1”, IGC is enabled. If it’s “0”, please test disabling add-ons and/or file a bug so we can fix it.

Why does IGC get disabled? IGC relies on “write barriers” to help the main program and the GC cooperate. Binary JavaScript components need to implement write barriers for any special objects they create, or else IGC is unsafe. So, if the browser detects anything that wasn’t coded with IGC in mind and would cause a problem, it disables IGC.
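To give a feel for what a write barrier does, here is a toy model in JavaScript. The real barriers are C++ code inside the engine, and the names here are made up for illustration; the idea is that if the mutator overwrites a reference after the collector has already scanned the object holding it, the barrier records the old target so the collector doesn’t miss it.

```javascript
// Toy model of a snapshot-style write barrier (illustration only; the real
// ones are C++ inside the engine). During an incremental collection, the
// program may overwrite a reference in an object the GC already scanned.
// The barrier remembers the overwritten target so it still gets visited.
const collecting = { active: false, remembered: new Set() };

function writeRef(obj, key, value) {
  if (collecting.active && obj[key] !== undefined) {
    collecting.remembered.add(obj[key]); // barrier: keep the old target alive
  }
  obj[key] = value;
}

const a = { name: "a" }, b = { name: "b" };
const root = {};
writeRef(root, "child", a);
collecting.active = true;   // an incremental GC slice is in progress
writeRef(root, "child", b); // would otherwise hide `a` from the collector
console.log(collecting.remembered.has(a)); // true
```

Binary components that create their own special objects bypass paths like `writeRef` above, which is why the browser conservatively disables IGC when it detects them.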

If all goes well in the testing channels (and things are looking good so far), IGC will go out in Firefox 13.

[update 5:52pm 3/2]
Aargh, I forgot to mention, that’s not even all the GC work that’s been going on: Terrence Cole added write barriers for generational GC, and he’s about done refactoring the mark phase to support moving GC. So we’re just about ready to start implementing moving GC inside the JS engine.

New Hire

Please welcome Kannan Vijayan to the IonMonkey team! He’s working in the Toronto office, our first JS person in Toronto. He’s taken on property deletion for IonMonkey as his first patch.

Goodbye

Chris Leary has left the JS team to try his hand at a startup. During his time on the JS team, Chris got regular expressions squared away for Firefox 4, contributed to JägerMonkey, implemented JIT hardening for JägerMonkey, created InfoMonkey, fixed the usual pile of bugs, and wrote hilarious blog entries. His final mission was on the IonMonkey team, where he mentored Andrew Drake building the register allocator, and implemented function inlining and the all-important on-stack invalidation. Best wishes on your new venture!

ES6

Jason Orendorff landed ES6 for-of, which is a new iteration construct over values that also works with ES6 generators and iterators.
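A quick taste of for-of, iterating over values rather than keys. The generator here uses the final ES6 `function*` syntax, which differs slightly from SpiderMonkey’s 2012-era generator syntax:

```javascript
// for-of visits values (unlike for-in, which visits keys).
const sums = [];
for (const v of [10, 20, 30]) sums.push(v);
console.log(sums); // [10, 20, 30]

// for-of also drives generators.
function* fib() {
  let [a, b] = [0, 1];
  while (true) {
    yield a;
    [a, b] = [b, a + b];
  }
}
const firstFive = [];
for (const n of fib()) {
  if (firstFive.length === 5) break;
  firstFive.push(n);
}
console.log(firstFive); // [0, 1, 1, 2, 3]
```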

Jason has also been doing some very cool work to help TC39 specify some ES6 constructs. The draft spec for Set gives only one constructor, with no arguments, creating an empty set. Jason argued for adding a constructor that takes an iterable argument, did some measurements showing the corresponding one-argument form was popular in Python, got support from some committee members, and implemented it for SpiderMonkey.
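The difference is easy to see side by side: the draft spec only allowed building a Set empty, while the form Jason pushed for seeds it from any iterable in one step.

```javascript
const empty = new Set();              // the draft spec's only constructor
const seeded = new Set([1, 2, 2, 3]); // the added iterable-argument form
console.log(seeded.size);   // 3 -- duplicates collapse
console.log(seeded.has(2)); // true
```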

Jason also did some experiments on Maps that iterate over their elements in the order they were added, a.k.a., “deterministic hash tables”. Most map constructs in most languages don’t specify an iteration order, presumably because defined order is expected to cost a lot in performance. Jason’s experiments found that deterministic hash tables aren’t really any slower, although they do use more memory. It’s a fairly promising result, and either way, having the data can only help the committee.
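Concretely, a deterministic Map iterates in insertion order no matter how the keys hash internally:

```javascript
// Entries come back in the order they were added -- not sorted,
// and not in some unspecified hash order.
const m = new Map();
m.set("zebra", 1);
m.set("apple", 2);
m.set("mango", 3);
const keys = [];
for (const [k] of m) keys.push(k);
console.log(keys); // ["zebra", "apple", "mango"]
```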

IonMonkey

The IonMonkey team is now mostly working on optimizations and adding support for more JS features. They recently got a big improvement in their Kraken score; check out their numbers on the desaturate benchmark.

Super Snappy + Sandboxing

Brian Hackett’s got some interesting new ideas. First, he’s working on “Super Snappy”, which allows the browser to run UI and content on separate threads, so that if content lags (e.g., JS stuck in an infinite loop), the UI can still respond. There’s a WIP patch in that bug.

Second, he’s got an idea for bringing sandboxing to Firefox. It’s just an idea so far but it looks promising to me.

Brian also landed “chunked compilation” a while back. Background: JM+TI uses types to compile JS functions to native code. If its assumptions about types are later invalidated, it may have to recompile the function. Say you have a program with 1000 small functions, and types get invalidated 10 times. That means recompiling 10 small functions, which takes negligible time. Now say you have a program of equal size but 1 large function. If types get invalidated 10 times, that means recompiling the 1000x-as-big function 10 times, which is very noticeable. Chunked compilation cuts large functions into pieces that are small enough so that recompilation is not a problem. This is particularly important for Emscripten programs.
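The kind of code that triggers an invalidation looks like this. The recompilation itself isn’t observable from script, so this is purely illustrative of the scenario, not a demo of the mechanism:

```javascript
// JM+TI would compile `add` assuming int32 operands after the hot loop.
// The later string call introduces a type TI hadn't observed, invalidating
// that assumption and forcing a recompile. With chunked compilation, only
// the affected chunk of a large function gets recompiled, not all of it.
function add(a, b) { return a + b; }

let n = 0;
for (let i = 0; i < 10000; i++) n = add(n, 1); // observed types: int32 + int32

const s = add("type", "change"); // new types observed: invalidate + recompile
console.log(n, s); // 10000 "typechange"
```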

JIT Hardening

Chris Leary landed JIT hardening for JägerMonkey. It’s impossible to know how vulnerable to JIT spraying we actually were before, but we should at least be much less vulnerable now.

Stuff that may affect you

Jeff Walden removed sharp variables. Sharp variables were a feature that allowed cyclic data structures to be serialized in a JSON-like format (using things like ‘#3’ as names for elements, so multiple things could point to them). But sharps were non-standard and not in use on the open web, so we took them out as part of our ongoing cleanup and simplification of SpiderMonkey code.

Jeff’s recent work on switching our fixed-width integer types (e.g., uint32) over to standard types (e.g., uint32_t) also unlocked the ability to get rid of a bunch of other old typedefs, so I’ve been doing that. So far, I’ve taken out JSIntn/JSUintn, intN/uintN, jsrefcount, JSPackedBool, and jsint; jsuint is next. If you’ve been using these, you’ll need to switch to standard types. Find and replace pretty much works.

By the way, I’d like to do something with JSBool eventually, but I’ll need to be more careful there, so it will take longer. JSBool is 4 bytes, while bool is usually 1 (and has no defined size), so switching JSBool to bool could break things. In particular, if the JITs call functions with JSBool parameters or return values, that would break. Also, MSVC 2005 compiling C doesn’t have bool. I’ll probably start by making JSBool a typedef for bool, see if that works, and if time goes by without problems, then take out the typedef and just use bool.