The monkeys in 2013

A monkey. That's the default name a component of Mozilla Firefox's JavaScript engine gets. Even the full engine has its own monkey name: Spidermonkey. 2013 has been a transformative year for the monkeys. New species have been born and others have gone extinct. I'll give a small but admittedly incomplete overview of the developments that happened.

Before 2013, JaegerMonkey had established itself as the leader of the pack (i.e. the best-performing compiler in Spidermonkey) and was the default JIT compiler in the engine. It had been successfully introduced in Firefox 4.0 on March 22nd, 2011. Its original purpose was to augment the first JIT monkey, TraceMonkey. Two years later it had kicked TraceMonkey out of the door and was the absolute ruler in monkey land. Along the way it had changed considerably. A lot of optimizations had been added, the most important being Type Inference. But there were also drawbacks. JaegerMonkey wasn't really designed to host all those optimizations, and it was becoming harder and harder to add new flashy things and easier and easier to introduce faults. JaegerMonkey had always been a worthy monkey, but it was starting to buckle under its age.
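
To see why Type Inference mattered so much, consider a minimal, hypothetical sketch (this is not Spidermonkey code): the engine can only emit fast machine code for an operator like + once it knows which types actually flow through a function.

    // Minimal sketch: why inferred type information matters to a JIT.
    function sum(values) {
      var total = 0;
      for (var i = 0; i < values.length; i++) {
        total += values[i];   // '+' is an integer add, a double add, or a string
      }                       // concatenation, depending on the observed types
      return total;
    }

    sum([1, 2, 3]);       // only int32 seen: the JIT can emit fast integer code
    sum([0.5, 1.5]);      // doubles seen too: recompile with floating-point math
    sum(["a", "b"]);      // strings seen: fall back to a slower, generic path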

[Figure: Improvement of Firefox on the Octane benchmark]

The year 2013 was only eight days old when, with the release of Firefox 18, a new bad boy arrived in town: IonMonkey. It had received its education from the elder monkeys, as well as from its competitors, and inherited their good ideas while trying to avoid the old mistakes. IonMonkey became a textbook compiler with regular optimization passes, adjusted only to work in a JIT environment. I recommend reading the release blog post for more information about it. At the same time, JaegerMonkey was downgraded to a startup JIT that warms up scripts before IonMonkey takes over.

But that wasn't the only big change. After the release of IonMonkey in Firefox 18, 2013 saw the release of Firefox 19, 20, and so on, all the way to number 26. Firefox 27, 28 and (partly) 29 were also developed in 2013. All those releases brought their own set of performance improvements:

Firefox 19 was the second release with IonMonkey enabled. Most work went into improving the new IonMonkey infrastructure. Another notable improvement was updating Yarr (the regular expression engine imported from JavaScriptCore) to the newest release.

Firefox 20 saw range analysis, one of IonMonkey's optimization passes, refactored. It was improved and augmented with symbolic range analysis. This was also the first release containing the JavaScript self-hosting infrastructure, which allows standard built-in functions to be implemented in JavaScript instead of C++. These functions get the same treatment as normal functions, including JIT compilation. This removes much of the overhead of calling between C++ and JavaScript and even allows built-in JS functions to be inlined into their callers.
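
To make that concrete, here is a simplified, hypothetical sketch of what a self-hosted builtin can look like (not the actual Spidermonkey source; ToObject, ToInteger and callFunction stand in for intrinsics the engine exposes to self-hosted code):

    // Hypothetical, simplified self-hosted version of Array.prototype.every.
    // Because it is plain JavaScript, the JIT can type-specialize it and even
    // inline it into the calling script.
    function ArrayEvery(callbackfn /*, thisArg */) {
      var O = ToObject(this);              // stand-ins for engine intrinsics
      var len = ToInteger(O.length);
      var T = arguments.length > 1 ? arguments[1] : undefined;

      for (var k = 0; k < len; k++) {
        if (k in O && !callFunction(callbackfn, T, O[k], k, O))
          return false;                    // one falsy result short-circuits
      }
      return true;
    }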

Firefox 21 was the first release in which off-thread compilation for IonMonkey was enabled. This moves most of the compilation to a background thread, so that the main thread can happily continue executing JavaScript code.

Firefox 22 saw a big refactoring of how we inline, making it possible to inline a subset of callees at a polymorphic call site instead of all or none of them. A new monkey was also welcomed: OdinMonkey. OdinMonkey acts as an ahead-of-time optimization pass that reuses most of IonMonkey, kicking in for scripts that declare they conform to the asm.js subset of JavaScript. OdinMonkey showed immediate progress on the Epic Citadel demo. More recently, Google added an asm.js workload to Octane 2.0, where OdinMonkey provides a nice boost.
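
For readers who have not seen asm.js before, here is a tiny, hypothetical example of the style of code OdinMonkey recognizes. The "use asm" pragma and the coercions (x|0 for int32) let the module be type-checked and compiled before it ever runs, while it remains valid (just slower) JavaScript in other engines.

    // Minimal asm.js module (illustration only).
    function AsmAdd() {
      "use asm";
      function add(x, y) {
        x = x | 0;             // parameters declared as int32
        y = y | 0;
        return (x + y) | 0;    // result coerced back to int32
      }
      return add;              // export the compiled function
    }

    var add = AsmAdd();        // linking; ahead-of-time-compiled code from here on
    add(2, 3);                 // 5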

Firefox 23 brought another first. The first compiler without a monkey name was released: the Baseline Compiler. It was designed from scratch to take over the role of JaegerMonkey. It is the proper startup JIT that JaegerMonkey was forced to become when IonMonkey was released. There are no recompilations or invalidations in the Baseline Compiler: it only saves type information and makes it easy for IonMonkey to kick in. With this release IonMonkey was also allowed to kick in 10 times earlier. At this point Type Inference was only needed for IonMonkey, so major parts of it were moved and integrated directly into IonMonkey, improving memory usage.
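
A hypothetical sketch of the resulting pipeline (the numbers are made up; the real thresholds are tuning details that have changed over time):

    // Illustration only: how a hot function moves through the tiers.
    function hot(x) {
      return x * 2 + 1;        // Baseline's inline caches record that x is int32
    }

    for (var i = 0; i < 100000; i++) {
      hot(i);                  // first calls: interpreter
                               // after some calls: Baseline code, gathering types
                               // once warm enough: IonMonkey-optimized code
    }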

Firefox 24 added lazy bytecode generation. One of the first steps in JS execution is parsing the functions in a script and creating bytecode for them. (The whole engine consumes bytecode rather than raw JavaScript source.) With big libraries, a lot of functions are never used, so creating bytecode for all of them adds unnecessary overhead. Lazy bytecode generation allows us to wait until the first execution before fully parsing a function, and avoids parsing functions that are never executed.
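
A small, hypothetical illustration of the pattern this targets:

    // Only 'used' is ever called, so only 'used' needs bytecode. With lazy
    // bytecode generation, 'unusedA' and 'unusedB' are merely syntax-checked
    // when the library loads; their bytecode is never generated.
    var library = {
      used:    function (x) { return x + 1; },
      unusedA: function () { /* ...large body that never runs... */ },
      unusedB: function () { /* ...large body that never runs... */ }
    };

    library.used(41);          // 42 -- triggers full parsing of 'used' only now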

Firefox 25 to Firefox 28: no single big performance improvement stands out. A lot of smaller changes landed under the hood, with the goal of polishing existing features and implementing small improvements. A lot of preparatory work went into Exact Rooting, which is needed for more advanced garbage collection algorithms like Generational GC. A lot of DOM improvements were also added.

Firefox 29: just before 2014, off-thread MIR construction landed. Now the whole IonMonkey compilation process can run off the main thread. If you have two or more cores, there are no longer delays in execution caused by compilation.

[Figure: Improvement of Firefox on the Dromaeo benchmark]

All these things resulted in improved JavaScript speed. Our score on Octane v1.0 has increased considerably compared to the start of the year, and we are once again competitive on that benchmark. Towards the end of the year Octane v2.0 was released and we took a small hit, but we were quick to find opportunities to improve, and our Octane v2.0 score has almost caught up with our Octane v1.0 score. Another example of how much Spidermonkey's speed has increased is the score on the Web Browser Grand Prix on Tom's Hardware. In those reports, Chrome, Firefox, Internet Explorer and Opera are tested on multiple benchmarks, including Browsermark, Peacekeeper, Dromaeo and a dozen others. During 2013, Firefox had been in a steady second place behind Chrome. Unexpectedly, the hard work brought us to first place in the Web Browser Grand Prix of June 30th, where Firefox 22 was crowned above Chrome and Opera Next. More important than all these benchmarks are the reports we get about overall improved JavaScript performance, which are very encouraging.

A new year starts, and improving performance is never finished. In 2014 we will try to improve the JavaScript engine even more. The usual fixes and fast-path additions will continue, but so will the higher-level work. One of the biggest changes we will welcome this year is the landing of Generational GC, which should bring big benefits by reducing long GC pauses and improving heap usage. It has been an enormous task, but we are close to landing it. Other expected boosts include improving DOM access even more, possibly a lightweight way to do chunked compilation in the form of loop compilation, different optimization levels for scripts with different hotness, adding a new optimization pass called escape analysis … and possibly much more.

A happy new year from the JavaScript team!

46 responses

  1. Ferdinand wrote on :

    Great post. I love to read this stuff.
    I have a question about memory usage during Octane 2.0. Chrome uses at its peak about 250MB but Firefox easily reaches 500+MB. Why is this and could exact rooting and the optimizations it brings fix this?

    1. YOLO wrote on :

      Why people have this obsession with low ram usage. Who works on IT and have a computer with less than 8 GB?

      That’s enough to run 2 VMs, a heavy IDE and a browser full of cat videos.

      I don’t care about RAM usage, I care about proper RAM cleanup, fixing ram/cpu leaks and such.

      1. ZeD wrote on :

        I do. I work with a 2Gb of ram

        1. Demy Haer wrote on :

          Zed,
          I too work on a machine with 2Gb ram.

          Yolo,
          I suspect you don’t have much business experience.

Most companies use their existing workstations as long as possible, at least until the warranty expires. When it is time to buy new ones, they don't purchase the latest, greatest, most powerful workstations. They get just enough of a machine to get the job done.

      2. YABE Yuji wrote on :

        The latest iPad has only 1 GB RAM. Firefox OS phone has only 256 MB.
        And memory cost and speed are not improving much these days even on PCs.

      3. hex wrote on :

        Mobile, FirefoxOS…

      4. jr wrote on :

        It is a valid concern, a Firefox OS device has 512 MB of RAM. And cheap laptops 2GB.

      5. Not Frank wrote on :

        That’s great for folks in IT. Somehow I thought Firefox was for more than just those folks.

      6. timeless wrote on :

        On Windows, Firefox is probably still a 32bit application, that limits it to 3GB of ram (1GB is reserved for OS mapped memory).

        While there was work on child content processes, it still means that a page or complex system of pages is limited much below 8 GB.

        Further, there are lots of normal computers running Windows with 4 GB (or less) ram.

        1. Matt wrote on :

          This (mostly) isn’t true.

          On a 64-bit OS, even 32-bit applications like Firefox can access the entire 4GB.

          On a 32-bit OS, applications can only access 2GB by default. There was a hidden setting you could change in Windows that would expose a 3rd GB, but it’s unsupported and the vast majority of users won’t take advantage of that.

        2. Aiden wrote on :

Firefox does have a 64-bit build, called Nightly.

      7. Ferdinand wrote on :

What a weird reaction. If Chrome can do the same in less memory, why not? I have to use 64-bit Firefox because I often go over 2GB of memory usage. Firefox becomes slower and slower the more memory it uses. At 2GB it becomes unbearable and I have to restart it.

        Please put your comment in the “640K is enough for everybody” bucket.

        1. Wesj wrote on :

There's definitely more work going on to reduce memory use (driven mostly by low-memory FFOS phones: not 640K, but < 200MB). Although typically we do much better than Chrome, and leaks like the ones you describe should be pretty rare now (in all the metrics I've seen, at least). You might try removing extensions to see if any of them are being bad citizens.

      8. theporchrat wrote on :

I would imagine it's because people who use Firefox are not all people who “work on IT”. Lots of people don't have 8gb of RAM. Our work laptops all come with 4gb, and you can almost guarantee that most people in the developing world have low-RAM machines.

      9. Macw666 wrote on :

When you go to Computer Science school there is a thing called “efficiency”, and a program that prints “Hello world” but needs 16gb of RAM isn't efficient enough. That's the reason everybody cares about low RAM usage: to avoid such things.

        1. Liam wrote on :

That's true, but a modern browser is pretty much as far from a hello world as you can get.
To the point, recent ff should be very efficient. If you're having issues with memory, create a new profile to see if that fixes things. If it does, then it is probably an extension (though it could be other things as well, which is why creating a new profile is the easiest way to diagnose, and fix, the issue).

          To the author: is there any chance of Yarr being replaced with a more efficient regex parser? Iirc, that’s an area where ff often trails chrome pretty consistently.

          1. Hannes Verschore wrote on :

I want to raise bug 976446. The current idea is to replace Yarr with irregexp. At the moment I don't think anybody is actively working on it (there are bigger fish to fry). So, anybody willing to have some fun? 😉

      10. zfs wrote on :

        I care! And I work in “IT”.
        I Have got 2GB of ram on my Work PC (WinXP) and 2.5 GB on my personal laptop which runs Debian.

        Mozilla does track the usage and other technical data for improving the experience.

PS: My company is considering a move to Ubuntu; Win7 Professional licenses for 200+ PCs are expensive 🙂

      11. Animesh wrote on :

        Of what use is your 8GB RAM, If you can’t even run spell-check before you post your comments!? 😉

      12. Ronaldo wrote on :

Agree. I think the same thing 🙂

    2. Hannes Verschore wrote on :

      About the difference between Chrome and Firefox when running Octane v2.0:

I just tested and saw a 500MB increase on both. I'm not really sure how you tested and only saw 250MB; it would be interesting to investigate. I know we still have the MemShrink project and we are still trying to improve memory! But as far as I understand, we are competitive and not lagging behind. You can find more information about it at: https://wiki.mozilla.org/Performance/MemShrink

One of the coming optimizations that will probably decrease peak memory is Generational GC. On http://arewefastyet.com/#machine=14 (GGC on FirefoxOS) we see an improvement on Octane 2.0, mostly due to decreased memory usage. We don't have to invalidate scripts as often to free memory and can run more in IonMonkey.

      1. Caspy7 wrote on :

Hrm. Why is the gap between the GGC numbers and the normal ones so much bigger here than in the numbers at the default http://www.arewefastyet.com/ ?

        1. Hannes Verschore wrote on :

That's because GGC can handle the low memory on FirefoxOS better than the normal IGC currently shipped. On desktop (the results you are referring to) there is enough memory available. That's why there is less of a gap on desktop.

Technically we need more garbage collections on FirefoxOS due to the constrained memory. A garbage collection removes Type Information and, as a consequence, all IonScripts. So we get kicked out of IonMonkey and need to warm up before we can run in IonMonkey again. As a result FirefoxOS will run more in Baseline and therefore be slower. GGC needs fewer garbage collections, so we can run more in IonMonkey and we will be faster.

If we cranked up the available RAM on the Unagi (the device tested on arewefastyet) to 2GB, there wouldn't currently be such a big gap between GGC and IGC.

(Note: GGC will allow us to do even better things related to memory, so eventually GGC will have better scores than IGC. The constraints on landing GGC are stability and not regressing performance. Currently we are happy with the performance and are polishing GGC for shipping. After the release, more performance improvements to GGC will come.)

          1. Erik Harrison wrote on :

            Gah! Why does GC kill compiled scripts? Shouldn’t that only occur in response to a memory-pressure event rather than a standard GC pass? Is there an issue with monkeypatching addressing in the generated code after a GC run?

          2. sfink wrote on :

            Because actual workloads are weirdly bursty. Half of all loaded code never runs, and the vast majority isn’t worth compiling because it doesn’t run enough. Neither of which is relevant to your question. But the stuff that *is* hot doesn’t stay hot forever, so you accumulate compiled code that you aren’t really using. It’s possible that unnecessary ICs build up over time too; I’m not sure. And yes, we patch lots of different addresses.

            But really, I don’t know the answer. As in, why not keep track of hot scripts and avoid throwing them out? I dunno. Maybe the type sets get bloated or something.

          3. Hannes Verschore wrote on :

The reason is TI. Until recently we couldn't remove TI for just one script only; it was remove everything or nothing. Since IonMonkey uses TI information and that information is invalidated during GC, the IonScript must be removed too.

A few weeks ago that changed, and we can now keep IonScripts during GC. We just don't do it yet. So that's something we expect to fix this year 😉

    3. Sherri – Dallas, TX wrote on :

      I LUV reading this stuff, too, even though I don’t understand the majority of it! I’m just a simple consumer who loves knowing there’s a browser out there that truly cares about our privacy and protection. THANK YOU!!!

  2. Monkeyless wrote on :

    You guys really should rename Baseline to Basement Monkey 😉

    1. Luke Wagner wrote on :

      Yeah, then we could get a shirt!

3. ᙇᓐ M. Edward Borasky wrote on :

    When does the next version of the stand-alone Mozilla JavaScript engine go stable? It looks like Spidermonkey 24 is the most recent stable version. Will generational GC be in the next stable release?

  4. abcd wrote on :

Wasn't there some talk of combining off-thread MIR construction and optimisation levels to use each core for compiling different parts of the code?

    1. Hannes Verschore wrote on :

I think for now we will stick with at most one core handling compilation. This drains less battery power (e.g. on laptops, tablets, phones) and also interferes less with the running code.

The idea with optimization levels is that there is a trade-off between compilation time and how soon we can run in IonMonkey. Normal compilers just compile for the best optimization and don't care how long compilation takes. A JIT compiler does care about this, and we sometimes need to settle for less optimal code in exchange for faster compilation. With optimization levels we can recompile a script with higher optimization (taking longer) when deemed interesting. This is all still research; optimization levels aren't working yet, only the infrastructure has landed. That's for 2014 😉

      1. Caspy7 wrote on :

Can it already adjust based on whether the device is plugged in, and perhaps other relevant hardware factors?

  5. Arglborps wrote on :

I find Chrome to be quite the memory hog (on OS X). It's just not that obvious because it's using so many background services, so the Chrome application itself looks slim in RAM usage, but if you add up all the other background tasks that Chrome is running, it never runs with less than 1GB of RAM. Firefox, on the contrary, tends to start off using around 500MB of RAM and doesn't bloat as quickly and badly as Chrome (or Safari for that matter).

  6. Kevin Lark wrote on :

    Firefox is pretty on my Macbook with 16Gb’s of ram 😉

  8. will wrote on :

    i am so glad, at the age of 34, i am starting school again. i say this with pseudo contempt. this fall, a tech school based on web design/networking, all of my knowledge, self taught, all of my typing skills, with two fingers.
    this time next year i may finally understand what the heck you guys are talking about. may the force be with me, along with capital letters and less commas.

    what does the firefox say?

  9. blitter wrote on :

    Been developing a 3d voxel engine. My latest creation is a 3d sand simulator (cutting edge), on FireFox I am maintaining a 60fps sustained output. Far beyond any other browser, watch this space!

  10. it wrote on :

    for the record, i have 32gb of ram and im not an IT guy

  11. tigertownpc wrote on :

    It is disappointing that the Javascript team has not put out a blog since January. It would be very nice to get an update on all the happenings the last 6 months.

  12. akbar jan wrote on :

We salute the people of the US, who bring such helpful technology to the rest of the world. Thanks.

  13. Frank Gericke wrote on :

    Nice to meet you! – In case you need french, spanish or any other language, I will be glad to help you!

    Mozilla is one of the best – I would even say: the very best in the globalized world – no boundaries, freedom, information… just terrific !!!!

    Hope that you all enjoy Mozilla – be it Thunderbird or Firefox, the clarity, the frankness or whatever this team offers since I don’t know when. – I am really totally – pst, you know what…

    Love

    Frank, Germany, Dortmund (BVB OLé – nevertheless I love Arjen too – don’t tell anybody – please!)

  14. teddy wrote on :

Why don't you put JavaScript V8 in the current browser? It would be good for Firefox. Listen to me, I know what you're missing out on in browser speed. There's my tip. LONG LIVE FIREFOX

  15. Pronoy Mukherjee wrote on :

The only problem with Firefox is that it takes a lot of time to start; if by any means you could fix that, then it will be the best Web Browser Ever!

  16. Firanolfind wrote on :

Congrats! I was always on Firefox's side!
Now you need to build a NodeJS based on the FF engine!!! Yahoooo!

For those who said Chrome takes less memory: just check Chrome's task manager and you will be amazed how much memory it actually uses.
On my PC the Chrome graphics process alone takes 1.5 GB for only two or three open tabs. I don't know if it is a bug or what, but it is true that, compared to WebKit-based browsers, Firefox is as stable as a bulldozer. Chrome crashes every session, but Firefox can keep working smoothly for days on end.

  17. George Mitchell wrote on :

    I am not a programmer, but really enjoyed that explanation of what is going on. Thanks!
