Incremental GC in Firefox 16!

Firefox 16 will be the first version to support incremental garbage collection. This is a major feature, over a year in the making, that makes Firefox smoother and less laggy. With incremental GC, Firefox responds more quickly to mouse clicks and key presses. Animations and games will also draw more smoothly.

The basic purpose of the garbage collector is to collect memory that JavaScript programs are no longer using. The space that is reclaimed can then be reused for new JavaScript objects. Garbage collections usually happen every five seconds or so. Prior to incremental GC landing, Firefox was unable to do anything else during a collection: it couldn’t respond to mouse clicks or draw animations or run JavaScript code. Most collections were quick, but some took hundreds of milliseconds. This downtime can cause a jerky, frustrating user experience. (On Macs, it causes the dreaded spinning beachball.)
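
To make this concrete, the kind of allocation pattern that keeps the collector busy looks roughly like this: code that continually creates objects which quickly become unreachable. (An illustrative sketch, not the demo's actual code.)

    // Each call builds a large temporary array and then drops it. Once
    // processFrame() returns, nothing references `points` any more, so the
    // garbage collector is free to reclaim that memory on a later collection.
    function processFrame() {
      var points = [];
      for (var i = 0; i < 100000; i++) {
        points.push({ x: Math.random(), y: Math.random() });
      }
      return points.length;  // only the length survives; the array becomes garbage
    }
    setInterval(processFrame, 16);  // keep allocating, frame after frame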

Incremental garbage collection fixes the problem by dividing the work of a GC into smaller pieces. Rather than do a 500 millisecond garbage collection, an incremental collector might divide the work into fifty slices, each taking 10ms to complete. In between the slices, Firefox is free to respond to mouse clicks and draw animations.
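
The slicing itself happens inside the JavaScript engine, but the scheduling idea is the same as splitting any long job into time-boxed pieces and yielding in between. Here is a rough sketch in script, purely to illustrate the idea (runInSlices is a made-up helper, not an engine API):

    // Do ~10ms of work at a time, then return to the event loop so Firefox
    // can handle clicks and paint animation frames before the next slice.
    function runInSlices(workItems, sliceBudgetMs) {
      function doSlice() {
        var start = performance.now();
        while (workItems.length && performance.now() - start < sliceBudgetMs) {
          workItems.shift()();      // one small piece of work
        }
        if (workItems.length) {
          setTimeout(doSlice, 0);   // yield, resume on a later turn
        }
      }
      doSlice();
    }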

I’ve created a demo to show the difference made by incremental GC. If you’re running a Firefox 16 beta, you can try it out here. (If you don’t have Firefox 16, the demo will still work, although it won’t perform as well.) The demo shows GC performance as an animated chart. To make clear the difference between incremental and non-incremental GC, I’ll show two screenshots from the demo. The first one was taken with incremental GC disabled. Later I’ll show a chart with incremental collections enabled. Here is the non-incremental chart:

Time is on the horizontal axis; the red dot moves to the right and shows the current time. The vertical axis, drawn with a log scale, shows the time it takes to draw each frame of the demo. This number is the inverse of frame rate. Ideally, we would like to draw the animation at 60 frames per second, so the time between frames should be 1000ms / 60 = 16.667ms. However, if the browser needs to do a garbage collection or some other task, then there will be a longer pause between frames.
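
The measurement behind the chart is simple; it amounts to something like the following (a simplified sketch, not the demo's actual code):

    // Record the gap between successive animation frames. At a steady 60
    // frames per second each gap is close to 16.7ms; a GC pause shows up
    // as a much larger gap.
    var frameGaps = [];
    var last = performance.now();
    requestAnimationFrame(function tick(now) {
      frameGaps.push(now - last);   // frame-to-frame interval, in milliseconds
      last = now;
      requestAnimationFrame(tick);
    });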

The two big bumps in the graph are where non-incremental garbage collections occurred. The number in red shows the time of the worst bump, in this case 260ms. This means that the browser was frozen for a quarter of a second, which is very noticeable. (Note: garbage collections often don't take this long. This demo allocates a lot of memory, which makes collections take longer in order to demonstrate the benefits of incremental GC.)

To generate the chart above, I disabled incremental GC by visiting about:config in the URL bar and setting the javascript.options.mem.gc_incremental preference to false. (Don’t forget to turn it on again if you try this yourself!) If I enable incremental GC, the chart looks like this:
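
(Equivalently, you can put the line below in a user.js file in your Firefox profile directory; delete it again afterwards to go back to the default.)

    // Same effect as flipping the preference in about:config.
    user_pref("javascript.options.mem.gc_incremental", false);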

This chart also shows two collections. However, the longest pause here is only 67ms. This pause is small enough that it is unlikely to be discernible. Notice, though, that the collections here are more spread out. In the top image, the 260ms pause is about 30 pixels wide. In the bottom image, the GCs are about 60 pixels wide. That's because the incremental collections in the bottom chart are split into slices; in between the slices, Firefox is drawing frames and responding to input. So the total duration of the garbage collection is about twice as long. But it is much less likely that anyone will be affected by these shorter collection pauses.

At this point, we're still working heavily on incremental collection. Some phases of collection have not yet been incrementalized. Most of the time these phases don't take very long, but users with many tabs open may still see unacceptable pauses. Firefox 17 and 18 will have additional improvements that will decrease pause times even more.

If you want to explore further, you can install MemChaser, an add-on for Firefox that shows garbage collection pauses as they happen. For each collection, the worst pause is displayed in the add-on bar at the bottom of the window. It's important to realize that not all pauses in Firefox are caused by garbage collection, so you can use MemChaser to correlate the bumps in the chart with the garbage collections it reports.

If there is a bump when no garbage collection happened, then something else must have caused the pause. The Snappy project is a larger effort aimed at reducing pauses in Firefox. They have developed tools to figure out the sources of pauses (often called “jank”) in the browser. Probably the most important tool is the SPS profiler. If you can reliably reproduce a pause, then you can profile it and figure out what Firefox code was running that made us slow. Then file a bug!

24 responses

  1. Ed wrote on :

    I am on the Aurora 16a channel, which should have IGC enabled (I checked about:config and it was enabled). My highest pauses were up to 1000ms, with many of the spikes at 300-500ms. So my graph looks much more like your IGC-disabled graph.

    And has work on Generational GC started yet?

  2. Florian wrote on :

    “However, the longest pause here is only 67ms. This pause is small enough that it is unlikely to be discernible.”

    I'm sorry, but that is plain wrong. A WebGL or canvas game running at 60FPS will take 16.6ms per frame. 67ms is more than 4 full frames. This alone would already be *very* noticeable. But the scheduling worsens the situation, because the frames before and after the collection cycle also have time used up (say 10ms all told), so a 67ms hiccup can delay the next callback out of requestAnimationFrame by up to 6-7 frames.

    Now please consider this: the Oculus Rift team (VR headset) states that, from their measurements, an input-to-display latency of 20ms is the desired value for head tracking, and the maximum is 40ms, beyond which nausea effects start to manifest.

    If you're talking about game rendering, you need to consider that the holy canonical unit of time is in the range of around 16ms. Any job that needs doing has to fit neatly into that time window. This includes things like physics (1ms allocation), audio synthesis (give it another 1ms), AI (the poor sods usually only get something like 0.5 to 0.25ms), and rendering (generously allocated, maybe 5ms).

    As a rule of thumb you can assume about 2/3 of the frame time is taken up by actually running the game. Preferably GC-ing would take zero time, but that's impossible. But again, as a rule of thumb, it should absolutely *NOT* take more than 5ms per frame. Full stop. That's as high as it can go. Any more than that and you start skipping frames again. Not acceptable.

    1. Robert O’Callahan wrote on :

      You’re quite right, but guaranteed 5ms maximum latency is incredibly difficult to hit and downright impossible in a Web browser. (The only GCs I know of that could achieve that target are research projects in very constrained environments, and also have terrible throughput.)

      To make progress towards guaranteed frame rates for games, we need to make WebGL accessible from Web Workers and put as much game logic as possible in Workers. That will protect game code from all the crazy stuff that happens on the HTML5 event loop. We'll still need to do a lot of work to reduce IGC pause times, though.

      1. Florian wrote on :

        The Python program in this paste allocates around 20KB of data every 10ms, and the delta to the last time the loop ran is never more than 16ms… http://pastebin.com/15Rf2M1h

        It’s not guaranteed of course. It just happens to be working pretty well. That’s where you want to be, not at some 60ms.

        1. Arne Babenhauserheide wrote on :

          The difference here is that Python does reference counting, which is like incremental garbage collection at every single variable access.

          But it has the disadvantage of making it impossible to exploit multiple cores with multithreading (that’s why python folks often directly focus on multiprocessing and message passing).

  3. Jasmine Kent wrote on :

    Definitely a step in the right direction, but +1 to Florian’s comment: 67ms can most certainly be discernible in animated graphics.

    It would be great if we could activate this incremental garbage collection manually each frame, ideally with a maximum time limit. If our code is targeting a 16ms frame time and we complete our processing within 11ms, then we could tell the garbage collector that it can use up to 5ms to do as much incremental collection as it can within that time without fear of dropping a frame.
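
    Something along these lines, say (a purely hypothetical sketch; requestGarbageCollection and updateAndDraw are made-up names, no such API exists today):

      function frame(start) {
        updateAndDraw(start);                        // our own work, ~11ms
        var spareMs = 16 - (performance.now() - start);
        if (spareMs > 1 && window.requestGarbageCollection) {
          window.requestGarbageCollection(spareMs);  // "use up to spareMs, no more"
        }
        requestAnimationFrame(frame);
      }
      requestAnimationFrame(frame);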

    1. mccr8 wrote on :

      This bug has some discussion of the issues involved with this idea: https://bugzilla.mozilla.org/show_bug.cgi?id=730113

  4. Igor Soarez wrote on :

    Silly question here: why must garbage collection stop UI events and JS execution? Couldn't the GC just run in a separate thread?

    1. mccr8 wrote on :

      It can be done, but the garbage collector is looking at the same objects that the JS currently running is touching, so it must be done carefully. That said, the Firefox GC actually does do some work on a separate thread: some types of objects can be thrown away once they are known to be garbage without affecting the main thread.

    2. Rursus wrote on :

      A very unsilly question that has boggled the minds of computer scientists for ages, but yes, it can be done. RScheme (http://www.rscheme.org/) implemented it as far back as about 1998, but it seems the technology is hard for implementors to grasp, however important it may be.

      1. Brunoais wrote on :

        An idea I never saw discussed here about GC is a thread dedicated to GC (I have already seen talk of using a separate thread for it, but not of how it would be implemented).

        My idea would be to use a reference counter. For each object, you associate a counter that counts how many references exist to that object. The idea is simple but not so easy to implement.
        The counter starts at 0.
        new Object();
        Counter at 0.
        var a = new Object();
        Counter at 1.
        var a = new Object();
        var b = a;
        Counter at 2.
        b = null;
        Counter at 1.

        For local variables in functions, for each execution of the function:
        // a = null, b = new Object1();
        // counters: Object1: 1
        function (a, b) {
          // counters: Object1: 1
          var c = new Object2();
          // counters: Object1: 1, Object2: 1
          a = c;
          // counters: Object1: 1, Object2: 2
          return;
        }
        Now it works like this: variable c is local, so it is gone now. As c goes away, Object2's counter drops back to 1.
        c is collected, but the objects are not collected because their counters are still at 1.
        In this situation:
        function (a) {
          // counters: Object1: 1
          var c = new Object2();
          // counters: Object2: 1
          var b = c;
          // counters: Object2: 2
          return;
        }
        b is collected and the counter is reduced; c is collected and the counter is reduced again. Now there is no way to reference Object2 in this execution, so you may freely collect Object2. Not trivial to implement, but this allows the JavaScript to never stop and the GC to work in the background the whole time.
        Now… does that counting take too much processing time? I don't think so, but I may be wrong.
        Note: this way of doing it takes a few more CPU cycles to do the GC, but it's all in the background, so there are no responsiveness problems related to GC.
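
        Here is a rough sketch of that counter idea written out as plain JavaScript, purely to illustrate it (a real engine would maintain the counts itself on every assignment):

        function RefCounted(payload, onCollect) {
          this.payload = payload;     // the object being tracked
          this.count = 0;             // how many references currently exist
          this.onCollect = onCollect; // called when the count drops to zero
        }
        RefCounted.prototype.addRef = function () { this.count++; };
        RefCounted.prototype.release = function () {
          if (--this.count === 0) {
            this.onCollect(this.payload);   // "collect" the object
          }
        };

        var obj = new RefCounted({}, function () { console.log("collected"); });
        obj.addRef();   // var a = obj  -> count 1
        obj.addRef();   // var b = a    -> count 2
        obj.release();  // b = null     -> count 1
        obj.release();  // a = null     -> count 0, collected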

  5. Forrest O. wrote on :

    Awesome. In my very unscientific test, this is bumping canvas performance up to Chrome's 60fps level. The test does a recursive image transform between two canvases to draw shapes: http://meemoo.org/iframework/#gist/3613776

  6. ghaith wrote on :

    Awesome! I downloaded the final version of Firefox 16 and at last FIREFOX ISN'T LAGGY. Thanks a lot!

  7. Heiko Wengler wrote on :

    Using Firefox 16, released yesterday, on Linux Ubuntu with MemChaser… and FF 16 running Google Reader SUCKS. iGC times of 250ms and up…
    The machine has 2+GB of memory. So where is the switch to USE the memory, please???

    FF 16 with more than 400MB resident RAM is “sleeping”, i.e. iGCing for 300ms+ all the time…

    Please test FF with Google Reader and make it FAST!

    Heiko

  8. Pingback from New garbage collection in Firefox 16 | Browser Download on :

    […] is of course an essential part of using the web browser. The incremental garbage collection newly integrated in Firefox 16 ensures that each time memory is gone through, only a small amount of space has to be […]

  9. Pingback from multiPetros » Firefox 16 Released on :

    […] Improvements around JavaScript responsiveness through incremental garbage collection […]

  10. Chris wrote on :

    OK, not sure why Mozilla advertised this feature twice, as it was also in FF 15, so it isn't new in 16.

    When I turned it on in FF 15 it made a huge improvement. Firefox is still leaking memory like crazy, but it no longer stutters and stalls once it gets over about 800MB of usage.

    I have now had Firefox leak up to over 2GB of memory usage while still staying very responsive (also a good job I have tons of RAM, eh!).

  11. Ralf wrote on :

    What about having JavaScript statements to manually free memory, and to turn GC on and off, for those who need it?

  12. ~ wrote on :

    Given the comments on laggy animation, it's probably a matter of having the garbage collector run more at the time of calls to memory allocation/deallocation functions, rather than just waking up that “thread” in a polled fashion because “it is time”.

    The garbage-collecting process should probably be made to take less time to complete each “stage” when this “thread” wakes up, so that it blocks CPU time less intrusively, but at the same time it should be called more frequently and more intelligently.

    It should also find a better point at which to automatically give more priority to user JavaScript code, even giving (much) more execution time to the JavaScript and other threads executing in the current window and the current tab.

  13. Arne Babenhauserheide wrote on :

    First off, I want to applaud you: the improvement from 260ms to 70ms is quite huge!

    This still does not suffice for getting web apps fluid, but it definitely improves them a lot.

    From personal experience, I can feel 30ms of lag on the command line, though. It's the difference between “unnoticeable” and “fast”.

  14. trlkly wrote on :

    Maybe I'm unusual, but I seem to get better performance with incremental garbage collection off on my single-core Atom netbook with 1 GB of memory. Sure, there are bigger spikes on that graph, but the biggest spikes are much lower. With incremental garbage collection enabled, I wind up with spikes greater than 1000ms on that test.

    1. trlkly wrote on :

      In case anyone cares, the problem is with memory that has been cached to disk. That seems to murder Firefox. Right now, I am working around it by using the Suspend Tab extension, which removes tabs from memory after 30 minutes of inactivity. This keeps my memory from getting cached out to disk.

  15. J. MacAuslan wrote on :

    Why is it STILL so laggy?!

    Details: FF 16.0.2, with MemChaser. Win7x64, 4 GB RAM. Several other apps of various sizes open but inactive.

    MemChaser shows (for a typical moment, i.e., right now): Resident 868 MB, iGC 963ms (yes, nine hundred milliseconds!), 20 seconds since the previous GC, with 187 ms for cycle collection.

    Sometimes it’s worse: A nice 25 sec since the previous GC, but TWO SECONDS for the GC itself. This is just stultifyingly lumpy.

    Realistic advice? (Sorry, a better PC isn’t in my immediate future.)

  16. Papou wrote on :

    My present problem with 15.0.1 is that the system, which is idle or almost idle, waits until I open a window or click something in Firefox to start a period of 10-15 or more seconds of iowait, during which the whole system is frozen; even the mouse pointer lags incredibly.
    I think that the purpose of garbage collection is not to reuse reclaimed space, but to reduce the real memory requirement for the same virtual memory usage.
    If a system's real memory is heavily used, garbage collection may cause a lot of paging activity and hence last longer and prevent other processes from running, in addition to Firefox itself.
    Hence, the largest benefit comes not only from doing the collection in short slices, but also from doing it when disk I/O activity is low and from generating only moderate paging activity. That means monitoring the system. Regarding the length of the slices, monitoring Firefox itself can help determine it. There's no problem locking the keyboard or a video if they're not being used, and you can get ahead with GC during that time.
