IonMonkey in Firefox 18

David Anderson


Today we enabled IonMonkey, our newest JavaScript JIT, in Firefox 18. IonMonkey is a huge step forward for our JavaScript performance and our compiler architecture. But also, it’s been a highly focused, year-long project on behalf of the IonMonkey team, and we’re super excited to see it land.

SpiderMonkey has a storied history of just-in-time compilers. Throughout all of them, however, we’ve been missing a key component you’d find in typical production compilers for languages like Java or C++. Both the old TraceMonkey* and the newer JägerMonkey translated JavaScript fairly directly to machine code. There was no middle step. There was no way for the compilers to take a step back, look at the translation results, and optimize them further.

IonMonkey provides a brand new architecture that allows us to do just that. It essentially has three steps:

  1. Translate JavaScript to an intermediate representation (IR).
  2. Run various algorithms to optimize the IR.
  3. Translate the final IR to machine code.

We’re excited about this not just for performance and maintainability, but also for making future JavaScript compiler research much easier. It’s now possible to write an optimization algorithm, plug it into the pipeline, and see what it does.
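To make that pipeline shape concrete, here is a deliberately toy sketch in plain JavaScript. None of this is IonMonkey’s actual code or data structures (the real compiler is written in C++); it just mimics the three steps with an invented instruction format and one trivially simple pass:

    // Step 1 stand-in: a tiny invented "IR" for the expression 2 + 2,
    // written as an array of instruction objects.
    const ir = [
      { op: "const", dest: "t0", value: 2 },
      { op: "const", dest: "t1", value: 2 },              // duplicates t0
      { op: "add",   dest: "t2", lhs: "t0", rhs: "t1" },
    ];

    // Step 2 stand-in: one small optimization pass that drops duplicate
    // constants and rewrites later uses to point at the first definition.
    function dedupeConstants(instructions) {
      const firstDefForValue = new Map();
      const alias = new Map();
      const out = [];
      for (const ins of instructions) {
        if (ins.op === "const") {
          if (firstDefForValue.has(ins.value)) {
            alias.set(ins.dest, firstDefForValue.get(ins.value));
            continue; // redundant definition: drop it
          }
          firstDefForValue.set(ins.value, ins.dest);
          out.push(ins);
        } else {
          out.push({
            ...ins,
            lhs: alias.get(ins.lhs) ?? ins.lhs,
            rhs: alias.get(ins.rhs) ?? ins.rhs,
          });
        }
      }
      return out;
    }

    // Step 3 would lower the optimized IR to machine code; here we just print it.
    console.log(dedupeConstants(ir));
    // [ { op: "const", dest: "t0", value: 2 },
    //   { op: "add", dest: "t2", lhs: "t0", rhs: "t0" } ]

A pass like this can be written, dropped into the middle step, and evaluated on its own, which is exactly what makes the IR-based design convenient for experimentation.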

Benchmarks

With that said, what exactly does IonMonkey do to our current benchmark scores? IonMonkey is targeted at long-running applications (we fall back to JägerMonkey for very short ones). I ran the Kraken and Google V8 benchmarks on my desktop (a Mac Pro running Windows 7 Professional). On the Kraken benchmark, Firefox 17 runs in 2602ms, whereas Firefox 18 runs in 1921ms, making for roughly a 26% performance improvement. For the graph, I converted these times to runs per minute, so higher is better:

On Google’s V8 benchmark, Firefox 15 gets a score of 8474, and Firefox 17 gets a score of 9511. Firefox 18, however, gets a score of 10188, making it 7% faster than Firefox 17, and 20% faster than Firefox 15.
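For anyone who wants to check the figures, the conversions are simple arithmetic; this snippet just redoes the math on the numbers quoted above:

    // Kraken reports total run time, so lower is better; converting to
    // runs per minute flips it so that higher is better.
    const kraken17 = 2602; // ms, Firefox 17
    const kraken18 = 1921; // ms, Firefox 18
    console.log(60000 / kraken17);                // ≈ 23.1 runs/minute
    console.log(60000 / kraken18);                // ≈ 31.2 runs/minute
    console.log((1 - kraken18 / kraken17) * 100); // ≈ 26% less time

    // V8 reports a score, so higher is already better.
    const v8_15 = 8474, v8_17 = 9511, v8_18 = 10188;
    console.log((v8_18 / v8_17 - 1) * 100); // ≈ 7% faster than Firefox 17
    console.log((v8_18 / v8_15 - 1) * 100); // ≈ 20% faster than Firefox 15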

We still have a long way to go: over the next few months, now with our fancy new architecture in place, we’ll continue to hammer on major benchmarks and real-world applications.

The Team

For us, one of the coolest aspects of IonMonkey is that it was a highly-coordinated team effort. Around June of 2011, we created a somewhat detailed project plan and estimated it would take about a year. We started off with four interns – Andrew Drake, Ryan Pearl, Andy Scheff, and Hannes Verschore – each implementing critical components of the IonMonkey infrastructure, all pieces that still exist in the final codebase.

In late August 2011 we started building out our full-time team, which now includes Jan de Mooij, Nicolas Pierron, Marty Rosenberg, Sean Stangl, Kannan Vijayan, and myself. (I’d also be remiss not to mention SpiderMonkey alumnus Chris Leary, as well as 2012 summer intern Eric Faust.) For the past year, the team has focused on driving IonMonkey forward, building out the architecture, and making sure its design and code quality are the best we can make them, all while improving JavaScript performance.

It’s really rewarding when everyone has the same goals, working together to make the project a success. I’m truly thankful to everyone who has played a part.

Technology

Over the next few weeks, we’ll be blogging about the major IonMonkey components and how they work. In brief, I’d like to highlight the optimization techniques currently present in IonMonkey (a short JavaScript sketch of the kind of code they target follows the list):

  • Loop-Invariant Code Motion (LICM), or moving instructions outside of loops when possible.
  • Sparse Global Value Numbering (GVN), a powerful form of redundant code elimination.
  • Linear Scan Register Allocation (LSRA), the register allocation scheme used in the HotSpot JVM (and until recently, LLVM).
  • Dead Code Elimination (DCE), removing unused instructions.
  • Range Analysis, eliminating bounds checks (will be enabled after bug 765119)
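To give a feel for what those passes look for, here is a hand-written JavaScript sketch. The function and variable names are made up for illustration, and IonMonkey applies these rewrites to its internal IR rather than to source text, but the redundancies are the same kind the passes target:

    function sumScaled(values, scale) {
      let total = 0;
      for (let i = 0; i < values.length; i++) {
        // LICM: `scale * 2` never changes inside the loop, so the multiply
        // can be hoisted out and computed once before the loop.
        const factor = scale * 2;

        // GVN: `values[i]` is read twice with no intervening write, so the
        // second load is redundant and can reuse the first.
        const a = values[i];
        const b = values[i];

        // DCE: `unused` is never read, so its computation can be removed.
        const unused = a - b;

        // Range analysis: `i` provably stays within [0, values.length), so
        // the bounds check on `values[i]` can be dropped.
        total += a * factor;
      }
      return total;
    }

    console.log(sumScaled([1, 2, 3, 4], 10)); // 200

(Register allocation has no source-level analogue: once the code is lowered, it decides which of these values live in CPU registers and which spill to the stack.)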

Of particular note, I’d like to mention that IonMonkey works on all of our Tier-1 platforms right off the bat. The compiler architecture is abstracted to require minimal replication of code generation across different CPUs. That means the vast majority of the compiler is shared between x86, x86-64, and ARM (the CPU used on most phones and tablets). For the most part, only the core assembler interface must be different. Since all CPUs have different instruction sets – ARM being totally different than x86 – we’re particularly proud of this achievement.

Where and When?

IonMonkey is enabled by default for desktop Firefox 18, which is currently Firefox Nightly. It will be enabled soon for mobile Firefox as well. Firefox 18 becomes Aurora on October 8th, and Beta on November 20th.

* Note: TraceMonkey did have an intermediate layer. It was unfortunately very limited. Optimizations had to be performed immediately and the data structure couldn’t handle after-the-fact optimizations.

Incremental GC in Firefox 16!

Bill McCloskey


Firefox 16 will be the first version to support incremental garbage collection. This is a major feature, over a year in the making, that makes Firefox smoother and less laggy. With incremental GC, Firefox responds more quickly to mouse clicks and key presses. Animations and games will also draw more smoothly.

The basic purpose of the garbage collector is to collect memory that JavaScript programs are no longer using. The space that is reclaimed can then be reused for new JavaScript objects. Garbage collections usually happen every five seconds or so. Prior to incremental GC landing, Firefox was unable to do anything else during a collection: it couldn’t respond to mouse clicks or draw animations or run JavaScript code. Most collections were quick, but some took hundreds of milliseconds. This downtime can cause a jerky, frustrating user experience. (On Macs, it causes the dreaded spinning beachball.)

Incremental garbage collection fixes the problem by dividing the work of a GC into smaller pieces. Rather than do a 500 millisecond garbage collection, an incremental collector might divide the work into fifty slices, each taking 10ms to complete. In between the slices, Firefox is free to respond to mouse clicks and draw animations.
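As a rough sketch of the idea in plain JavaScript (the real collector is implemented in C++ inside SpiderMonkey, and the names below are invented for illustration), each slice does a bounded amount of work and then yields back to the event loop, so the browser can paint and handle input before the next slice runs:

    // A generator that marks objects one at a time; each `yield` is a
    // point where the collector may pause between slices.
    function* markAll(objects) {
      for (const obj of objects) {
        obj.marked = true; // stand-in for the real marking work
        yield;
      }
    }

    // Run the work in slices of roughly `budgetMs` milliseconds each.
    function runSlices(work, budgetMs) {
      const start = Date.now();
      while (Date.now() - start < budgetMs) {
        if (work.next().done) return; // collection finished
      }
      // Budget used up: schedule the next slice and let other work run first.
      setTimeout(() => runSlices(work, budgetMs), 0);
    }

    const heap = Array.from({ length: 100000 }, () => ({ marked: false }));
    runSlices(markAll(heap), 10); // aim for ~10ms slices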

I’ve created a demo to show the difference made by incremental GC. If you’re running a Firefox 16 beta, you can try it out here. (If you don’t have Firefox 16, the demo will still work, although it won’t perform as well.) The demo shows GC performance as an animated chart. To make clear the difference between incremental and non-incremental GC, I’ll show two screenshots from the demo. The first one was taken with incremental GC disabled. Later I’ll show a chart with incremental collections enabled. Here is the non-incremental chart:

Time is on the horizontal axis; the red dot moves to the right and shows the current time. The vertical axis, drawn with a log scale, shows the time it takes to draw each frame of the demo. This number is the inverse of frame rate. Ideally, we would like to draw the animation at 60 frames per second, so the time between frames should be 1000ms / 60 = 16.667ms. However, if the browser needs to do a garbage collection or some other task, then there will be a longer pause between frames.

The two big bumps in the graph are where non-incremental garbage collections occurred. The number in red shows the time of the worst bump: in this case, 260ms. This means that the browser was frozen for a quarter second, which is very noticeable. (Note: garbage collections often don’t take this long. This demo allocates a lot of memory, which makes collections take longer in order to demonstrate the benefits of incremental GC.)
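To put the number in perspective: at a 60 frames-per-second target, the budget between frames is about 16.7ms, so that single 260ms pause swallows roughly fifteen frames’ worth of time. A quick back-of-the-envelope check (both figures come from the text above):

    const frameBudgetMs = 1000 / 60;           // ≈ 16.667 ms per frame at 60 fps
    const worstPauseMs = 260;                  // worst bump in the chart above
    console.log(worstPauseMs / frameBudgetMs); // ≈ 15.6 dropped frames' worth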

To generate the chart above, I disabled incremental GC by visiting about:config in the URL bar and setting the javascript.options.mem.gc_incremental preference to false. (Don’t forget to turn it on again if you try this yourself!) If I enable incremental GC, the chart looks like this:

This chart also shows two collections. However, the longest pause here is only 67ms. This pause is small enough that it is unlikely to be discernible. Notice, though, that the collections here are more spread out. In the top image, the 260ms pause is about 30 pixels wide. In the bottom image, the GCs are about 60 pixels wide. That’s because the incremental collections in the bottom chart are split into slices; in between the slices, Firefox is drawing frames and responding to input. So the total duration of the garbage collection is about twice as long. But it is much less likely that anyone will be affected by these shorter collection pauses.

At this point, we’re still working heavily on incremental collection. There are still some phases of collection that have not been incrementalized. Most of the time, these phases don’t take very long. But users with many tabs open may still see unacceptable pauses. Firefox 17 and 18 will have additional improvements that will decrease pause times even more.

If you want to explore further, you can install MemChaser, an addon for Firefox that shows garbage collection pauses as they happen. For each collection, the worst pause is displayed in the addon bar at the bottom of the window. It’s important to realize that not all pauses in Firefox are caused by garbage collection, so you can use MemChaser to correlate the bumps in the chart with the collections it reports.

If there is a bump when no garbage collection happened, then something else must have caused the pause. The Snappy project is a larger effort aimed at reducing pauses in Firefox. They have developed tools to figure out the sources of pauses (often called “jank”) in the browser. Probably the most important tool is the SPS profiler. If you can reliably reproduce a pause, then you can profile it and figure out what Firefox code was running that made us slow. Then file a bug!

Introducing the official Mozilla JavaScript team blog

Naveed Ihsanullah


Mozilla’s mission is “to promote openness, innovation and opportunity on the web.” Here on the JavaScript engine team we have unique opportunities to support this mission. Our work stretches from technical challenges like Incremental Garbage Collection to working out the details of new language features in JavaScript.next.

We have many great projects going on that we are excited to share with you. This blog will make it easier for everyone to keep up and stay involved with SpiderMonkey as the team helps build a better web.