Did you write that code three times?

The code isn’t good enough until it’s been written three times.

I have no idea where that adage comes from, but I love it. I could’ve sworn I saw it in the preface to How to Design Programs, but I don’t see it there now. Wherever it was, it rang true to me, so I want to encourage everyone to write their code three times.

There is, of course, nothing magical about the number three [1]: I can write hello world well enough in one try, and some particularly complex system might benefit from six rewritings or more.

But for regular challenging work, three often seems to be about right. For nontrivial work, clean design is essential for maintainability, correctness, security, and performance. And creating a clean design ahead of time is generally impossible. There are too many problems to solve at once and too many unseen obstacles. Instead, programmers need to create a rough design, then write some rough code, then see a better design, then revise to some better code, until it’s good enough.

If you take this idea seriously, it may change the way you work.

You might become more patient, expecting your first try not to be good enough. Maybe you intentionally create your first try as a rough draft that doesn’t implement everything and won’t really work, just to see how the design comes out.

If you’re learning to program, or learning new languages or domains, you might take extra care to write everything at least three times at first, so that you can learn more quickly what good design is and how to find it.

If you’re sending your code for peer review, you might not expect it to pass until you’ve written it three times. It might make sense to put it up after the second time, so you can incorporate peer feedback into the hopefully good enough third iteration.

In production work, sometimes there isn’t time to write the code three times. So you can write things only one or two times, but there is a price. For JägerMonkey, David Anderson wrote the base JIT twice: first the “JM1” compiler that used pre-fatvals 32-bit jsvals and got declared a prototype, then the “moo” compiler that became the final version. The “moo” design was noticeably better than “JM1”, and was certainly good enough to ship in a high-quality product, but it did have its flaws. In particular, the requirements for register allocation were not really understood until the very end of the project, and got put together in a way that was workable, but bug-prone and difficult for programmers to understand and use correctly.

A serious risk to watch out for is shipping research projects or prototypes. Sometimes a person builds something very cool, which they get excited about and want to ship right away. Multiple times I have seen shipping the first iteration of something end badly, or even go badly and then linger painfully. I guess it’s the best thing to do in some situations, like getting some system up quickly for a startup, but it often seems to go poorly with established products. [2] In any case, you should definitely know that you are shipping not-good-enough code and plan for the results.

The difficult thing about all of this is that the problems with code that hasn’t been written enough times tend to come out much later on. Here and now, the code seems to work. Only much later, possibly over the next five years, come the problems and the pain. And there’s no way to really see how all this works without learning that the hard way over the course of a few years.

Including a few minor revisions, I wrote this article once. 🙂

[1] Of course, the number three is magical in its deep embedding in human cultures and minds.

[2] Time horizon is important: if the code is to be used for a day, of course it can be low quality; if you’re going to support it for ten years, it’ll be a long ten years unless it’s very high quality. Complexity and debuggability, too: throwing together a new GC in one try is not recommended.

New SpiderMonkey blog!

There is now an official SpiderMonkey blog!

Various SpiderMonkey developers blog about various things, so we wanted to create one blog that you can follow for everything SpiderMonkey. Personal blogging will continue, but we’ll link to SpiderMonkey posts from there. It’s already got posts on accessing variables, incremental GC, and IonMonkey.

[Update 11:00am 10 Oct 2012] The SpiderMonkey blog is on the Planet Mozilla Projects feed, under the name Mozilla JavaScript. The projects feed is distinct from the Planet Mozilla people feed, so you won’t see the blog on the people feed.

Incremental GC now in Firefox Aurora

We’ve been holding our breath for a while on incremental GC (IGC), but now we can exhale: it’s currently in Aurora (Firefox 16) and working well. IGC comes from months of hard work by Bill McCloskey, who designed and implemented the incrementalization, and also spent a long time tuning performance and fixing the many subtle bugs that inevitably come up when you try to change the memory management of a complex system. Huge kudos to Bill for carrying through such a tough and important project.

Simply put, incremental GC means fewer noticeable pauses. If you run Firefox with not too many tabs (like I do), then your GC pauses are probably around 100-200ms, which is a very noticeable glitch. If you run with hundreds of tabs, then you might see pauses more like 500ms, which feels like a freeze. Incremental GC breaks up that time into 10-20ms slices, which are generally not noticeable, and spreads them out over time.

So this is a huge improvement, but keep in mind there is no perfect fix for GC pauses. For any incremental GC system, if memory is being allocated too fast, incremental GCs will not be able to keep up, and the system will be forced to pause for a full GC in order to avoid running out of memory [1]. But for the most part, IGC seems to be doing the trick, so if you’re still getting bad GC pauses, please let us know so we can fix them.
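To see why allocation rate matters, here is a toy model (my own simplification for illustration, not SpiderMonkey’s actual scheduling heuristics) of an incremental collector racing the mutator:

```cpp
// Toy model: each "tick", one incremental slice marks `markPerTick`
// units of a heap of `heapSize` units, while the mutator allocates
// `allocPerTick` more. If the heap outgrows `limit` before marking
// finishes, the engine must fall back to a blocking full GC.
bool incrementalGCKeepsUp(int heapSize, int limit,
                          int allocPerTick, int markPerTick) {
    int marked = 0;
    while (marked < heapSize) {
        marked += markPerTick;     // one GC slice
        heapSize += allocPerTick;  // the mutator keeps allocating
        if (heapSize > limit)
            return false;          // forced full, non-incremental GC
    }
    return true;
}
```

Mark a bit faster than the program allocates and the slices stay short; allocate faster than the collector can mark and the fallback full GC is unavoidable.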

Another thing to note is that technically what’s just reached Aurora is incremental marking. Our GC has two phases, mark and sweep, and the sweep phase is not incremental yet. The last slice of GC, which includes sweeping, seems to often take 50 ms or so, which is nearly negligible for many purposes but would still disturb smooth animation.

Fortunately, Bill and Jon Coppeard (who recently joined the JS team working out of London) are making rapid progress on incremental sweeping as well. Bill is also continuing to tune IGC to keep pauses short on various machines and under various conditions.

[Update 3:11pm 20 Jul 2012: Bill reports in bug 775365 that the current work is for incrementalizing sweeping of some kinds of objects. Apparently sweeping each kind of thing incrementally presents its own issues, so getting incremental all the way through will take longer.]

[1] Generational GC (also under construction) will solve this problem for programs that rapidly allocate many short-lived objects. If you’re rapidly allocating long-lived objects, you’re being very mean to your GC and it will be very mean back to you.

SpiderMonkey API Futures

“I have altered the SpiderMonkey API. Pray I do not alter it further.”
–Darth Mandelin

It’s time for a new API.

Why. The immediate trigger is that we’re building a generational GC. We need generational GC in order to greatly reduce GC pauses for short-lived objects and make games, apps, and audio/video run more smoothly. Generational GC means objects (and all other “gcthings”) will move during GC, which in turn means object pointers can change. That means client code like this won’t work:

  JSObject *obj = ...;
  // Define a property. Under a moving GC, this call could trigger a
  // collection that relocates |*obj|...
  rc = JS_DefineProperty(cx, obj, "a", INT_TO_JSVAL(1), 0, 0, 0);
  // ...leaving |obj| a stale pointer by the time of this second call.
  rc = JS_DefineProperty(cx, obj, "b", INT_TO_JSVAL(2), 0, 0, 0);

In the current configuration, we use conservative stack scanning to get GC roots, so the above code is fine. Conservative stack scanning means we scan all words in the C stack and all registers, and if any word |w| has a value that could be an address within the GC heap, we assume that it is, and start marking using that as a root. In the code above, |obj| will be stored either on the C stack or in a register, so it’s a root.

The problem with conservative scanning is that we can never know whether |w| is really a gcthing pointer, or if it’s just an integer value that happens to be in that range. So with conservative scanning, we can’t update |w| to the new location. And that would mean no compacting GC and no generational GC.
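The ambiguity is easy to demonstrate in toy form (illustrative only; the real scanner checks against the GC chunk layout, not a simple address range):

```cpp
#include <cstdint>

// A conservative scanner can only ask: does this stack word *look like*
// a heap address? A genuine object pointer and a plain integer that
// happens to fall in the heap's address range answer identically, so the
// scanner may mark through the word but can never safely rewrite it.
bool mightBeHeapPointer(uintptr_t word,
                        uintptr_t heapStart, uintptr_t heapEnd) {
    return word >= heapStart && word < heapEnd;
}
```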

So conservative scanning has to go. Without conservative scanning, we need to provide some kind of rooting or handle API, and require client code to use that API to tell the engine which pointers are gcthing pointers, so they can be properly updated.

Past experience is that C rooting APIs are miserable: the user needs to explicitly root and unroot each pointer used. In practice, it’s never right. Even if all our users could magically get it right, we probably couldn’t for SpiderMonkey itself and for Gecko.

It’s much easier with a C++ API. With a C++ API, we can define types like |Handle<T>| and |Root<T>|. Client (and SpiderMonkey internal) code can then just say |Handle<JSObject*>| and such where it used to say |JSObject *| and be correct. And any C++ compiler can help you make sure the rooting is correct.
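The core idea can be sketched with hypothetical simplified types (this is not the actual JSAPI design): a rooted wrapper registers the address of its pointer, so the collector can both find every root and rewrite it after moving an object.

```cpp
#include <vector>

// Hypothetical sketch, not the real JSAPI: each RootedPtr registers the
// address of its internal pointer in a global root list. A moving GC
// walks the list to find roots, and updates each one in place when the
// object it refers to relocates.
static std::vector<void**> gRoots;

template <typename T>
class RootedPtr {
    T* ptr_;
public:
    explicit RootedPtr(T* p) : ptr_(p) {
        gRoots.push_back(reinterpret_cast<void**>(&ptr_));
    }
    ~RootedPtr() { gRoots.pop_back(); }  // assumes LIFO (stack) lifetime
    T* get() const { return ptr_; }
};

// Called by the (hypothetical) GC after it moves an object.
void updateRootsAfterMove(void* from, void* to) {
    for (void** root : gRoots)
        if (*root == from)
            *root = to;
}
```

Because the type itself does the registration, client code that uses the rooted type is correct by construction, which is exactly the property a C rooting API can’t give you.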

The short story is: Generational GC => C++ API.

What. The strictly virtuous thing to do would be to take the time to have a lot of discussions and design a brand new C++ API that would remain stable for the future. But we are not in a position to do that this year: we need to ship a new compiler, two new GCs, ES6, and enhancements to the Debugger object. Designing and shipping an entirely new stable API would be way too much. Perhaps next year.

So instead, we’re going to:

  • Change the API to C++, meaning that we are free to use C++ features in the API, and future API users will need to use C++ to use the API.
  • Design a “pretty good, pretty stable” C++ rooting/handle API and incorporate it into JSAPI. This is bug 753609, so if you’re going to use the API, please go there, check out the current version, and give us your thoughts.
  • Make gradual changes to the JSAPI as needed, primarily to support Gecko, as we do now. Some of these may be to add C++ inline functions or things like that, which we have already started doing for small things to boost performance.

[Addendum May 9 5:22pm] Users who need a C API can use any of the existing source releases, including the in-progress js-1.8.7/js-10 based on the Firefox ESR10 source, which includes JM+TI. We might also do a future source release off Firefox 13 or thereabouts. Once generational GC gets going, JSAPI will be C++.

The MoCo Handbook for New Employees

Don’t you wish there were one?

That was my first thought upon coming across the Valve Handbook for New Employees. It has sweet comic book art and everything.

After skimming it, my next thoughts were along the lines of: 300 employees and no managers at all? Everyone just decides themselves what to work on? What kind of crap is this? Either Valve has a protected position in the market, so they don’t need to execute quickly on specific things, or there is a lot of structure, it’s just informal. (The manual actually does talk about temporary and informal structure.) I’m curious to know if either guess is on track.

But of course I also remembered that Mozilla is not so different. We have 600+ employees now (is it 700+ already?) and various kinds of managers, but there is still a great deal of autonomy, and everyone does generally get to/have to make a lot of decisions about what to do. And I think that is often hard for new people. So, to possibly make things slightly easier, I offer:

The MoCo Cheat Sheet for New Employees

(This is written by someone from engineering, so it may be engineering-centric, but I have to imagine it’s not terribly off-base elsewhere. Also, it’s not complete by any means.)

You’ll hear the word “open” a lot.

Mozilla is all about open. Open means you can talk about your work publicly. Anyone from anywhere might pop in and comment on your work. You can pop in and comment on any other project.

Open also means people won’t be telling you what to do very much, but rather expecting you to figure out how to contribute. Along with that, the org chart is never kept up to date, and is probably more of a DAG or hypergraph than a tree. This is exciting and fun, but can also be challenging to navigate.

Becoming Effective. As a new employee, one of your first priorities is becoming effective at your basic job. For example, if you are on the JS engine team, your job is to fix bugs and add features to SpiderMonkey. Fixing SpiderMonkey bugs is hard, so it will take a lot of practice and learning to become good at it.

There are all kinds of new procedures, skills, and/or codebases to learn, but what’s special about Mozilla is that you need to learn how to decide what to work on. Even if you are producing top-quality code at maximum speed, the value of that code is still entirely dependent on how relevant the project is.

What to work on. It’s not a total free-for-all. You will probably get assignments. But the assignment may be very general, like “make GC not suck”. So you’d need to figure out what that really means. And you’ll probably get way more assignments run past you than you could ever do. So even if you try to just do those, you’ll have to choose. And quite often assignments are more offered than, well, assigned.

Why don’t we just tell people what to do? Speaking for myself, one reason is that I’m busy with all kinds of stuff, and I don’t really have time for that. But more importantly, I don’t know everything, so I’d really like your ideas and your help in making decisions. And most of all, I find that the results are far better when people choose their own projects–and choose things that they are fired up about, whether it’s because they’ll get to learn, they’ll get to feel badass, or for whatever reason it’s something important to them personally.

So how do you figure out what to do? In one word, listen. Talk regularly with people in and around your area to find out what problems need solving, what bugs need fixing, and what enhancements are needed. In two words, listen intelligently. Simply aggregating everyone else’s opinion is OK to start with, but I’d really much rather see everyone else’s opinion blended with your own unique point of view. Plus you’ll want to pick out the projects that you currently have the skills to get traction on and that you’ll find interesting. And you’ll need to figure out who really knows what’s worth working on vs. who everyone else thinks knows what’s worth working on vs. who thinks they know what’s worth working on but really doesn’t.

That’s how I started out: at first Taras Glek gave me a bunch of ideas, and I tried to figure out which ones he needed most and which ones would be good for me to start off with and learn from. Over time, other people started asking me to do things, so I did those things too, and started talking to and hearing from more people. I heard about TraceMonkey, and I was really excited about it but I didn’t get a spot on the project right away. (After all, I knew next to nothing about SpiderMonkey at the time.) But I was patient and eventually TM reached a stage where it needed people to fix bugs and performance faults, so I jumped in. And then I started hearing more about TraceMonkey and JavaScript, and I got more ideas and opportunities.

One lesson from my experience is that figuring out what to work on is pretty hard. So if your first few assignments seem to turn out not to be of much value, don’t sweat it. (Often new people are offered low-priority stuff or weird ideas so they can learn without blocking ongoing critical work.) Just keep talking, listening, trying things out, and learning.

Also: watch out for tar pits. There are projects out there to work on that are ill-defined, or that are popular to talk about but not really useful, or that will have an ever-expanding scope, or that have been tried 3 times and have always failed. You want to work where you will have maximum impact, not minimum. So if you find yourself in one of these, call for help: get out or get someone to help you get out.

Becoming Visible. I think visibility is important in pretty much any software company bigger than Dunbar’s number, but in such a fluid environment, it’s even more important. If you’re after external rewards, your managers and peers need to know about the great stuff you’re doing. If you’re after getting to work on the big cool projects and such, people need to know who you are and what you can do.

You don’t need to worry about becoming super-visible immediately, but you can start taking steps to improve your future visibility right away:

  • Starting a blog is an excellent thing to do. You may think you don’t have much to write about. But you do: your work and your experiences at Mozilla. There is almost certainly someone out there who’s interested. I started this blog talking about abstruse aspects of program analysis, and even that found an audience.
  • Talking to people regularly in person or on IRC is also great, of course. “What are you working on” is a good conversation starter and likely to be reciprocated.
  • Asking people for help automatically lets them know what you’re working on.
  • Helping other people gives them a chance to see what you can do.

There are also of course all the mailing lists and newsgroups and Yammer. I have the sense that a lot of the talk on there is not that productive so I’m hesitant to recommend spending more time on them to a new person, but YMMV.

Doing great work does also get visibility. One nice thing about our fluid organization is that even if you are just doing great reviews or a great job on some series of bugs, which gets you visibility only with your peers, that peer visibility actually counts for a lot. Doing great work on a really important project gets lots of visibility, but that’s a total duh.

Being Visible. The common notion of visibility is mostly about telling other people about your accomplishments (which for us extends also to your capabilities), but because we are so open, there is another side, which is letting people see you working.

We are supposed to be “all open and stuff”. It can be intimidating to work out in public where anyone could see you fail or criticize you. I highly recommend responding by embracing it. In Bugzilla, use the assignee field to show what you’re working on. Post half-baked design ideas before you start coding. Post WIPs (works in progress) to let people see your crappy incomplete code. If someone asks you to do something and you don’t have time or think it’s a bad idea, say so right away.

The advantage of this is that you don’t have much to worry about. No one’s going to discover what you’ve been working on for the past 2 months and criticize you for wasting your time, because they would have been able to give you feedback right away. No one’s going to complain you’re not working on their favorite bug, because they can either see that you are, or you’ve told them you’re not. If it’s all in the open, and no one’s complaining, it’s fair for you to think you’re on the right track.

Just to make sure you don’t think I’m (totally) crazy, I should point out that there are times to be less open. When you’re working on a proposal to change procedures or do a crazy project, or a presentation, or something like that, it does make sense to get feedback from a small group before taking it public. And that is in fact OK around here.

Becoming Influential. You probably have all kinds of ideas about how to make the web better, or the JS engine, or Bugzilla, or our review process. That’s excellent.

But don’t be too surprised if at first most of your ideas are met with skepticism, misunderstanding, refusal, or are just ignored. There are way more ideas out there than there are people to work on them, so everyone already has 35 great ideas they’d love to try. They’d have to decide that your new idea is better than those 35 in order to think about it all that much. Or maybe they’ve already heard that idea and they rank it #67, so they’re not that motivated to think about it again. Getting ideas heard can be hard.

But also, don’t be discouraged if your ideas don’t seem to move people very much. It doesn’t mean your ideas are bad. It doesn’t mean no one’s ever going to listen to them. It does mean that if you want to be heard you’re going to have to rise to the challenge and work at it.

The easiest way to get more attention for your ideas is if you have “open source street cred”. If you are new, there is of course a good chance that you don’t have any yet. But as you become effective and visible, you will get that street cred and more chance to be heard.

What to do in the meantime? I don’t have a recipe. I recommend just to keep trying. That’s also why I said not to get discouraged. You can try an idea on different people. Maybe the first 5 are not too interested but the 6th has time and wants to work on it. You can try it over time. Maybe when people first hear it, it is unfamiliar and weird, but after talking with you about it over time, they come to see its merits. You can write code or do some experiments to test the idea and show how it might work. You can change and refine the idea to see if different versions get more attention. If you keep trying and pay attention to what works and what doesn’t, you will gain skill in promoting your ideas.

One thing I think is clearly effective in getting heard is service. If you help other people solve problems, make their jobs easier, or help them get their ideas heard and implemented, there’s a good chance they’ll be more inclined to listen to you and help you out. That can go a long way even before you have any street cred.

Now you tell me. If you’re effective, visible, and influential, then you’ve made it. You’re not even remotely a n00b. It’s your turn to tell me (and other Mozilla managers and mentors) how you did it and what we can do to better help new employees.

A Personal Message

This is my personal comment on Brendan’s donation to Proposition 8. I believe Planet Mozilla is now just for Mozilla content. This is Mozilla content: it’s about the pain people have experienced, which necessarily affects their work and working relationships. I need to do this for myself: if I don’t, I’ll feel isolated and silenced. I hope it may comfort anyone who needs comfort.

In this post, I speak for myself: no one else has read it, edited it, or approved it prior to posting. These words are imperfect and they are mine alone.

tl;dr version: This hurts. If you hurt too, you’re not alone. If anyone needs someone to talk to, I’m here.

I have a strong connection to LGBT issues, including Proposition 8. My wife is bisexual. We have many LGBT friends. Opposition to discrimination and stigma has long been one of my most closely held political beliefs.

I have a strong connection to Brendan. Brendan has been one of my mentors at Mozilla. Brendan is one of the founders of a community I love being a part of, and a company I love working for. Brendan passed on SpiderMonkey module ownership to me.

Because of all these connections, when I learned last Wednesday night that Brendan donated $1000 to support Proposition 8, I experienced intense pain. I was first shocked, then disappointed, then sad, then anxious, then angry, and then back to disappointed. I’ve been able to think of little else at work or at home. I’ve had heartburn, tense muscles, and an anxious, alerted feeling every day.

If you were hurt too, you are not alone.

Various comments have led me to believe that some might not understand why people feel so strongly about this. It’s been suggested that this is just a difference of opinion and we have to learn to live with it. I agree that we have to learn to live with differences, but Proposition 8 is not just a difference of opinion. I can’t speak for everyone, but for me and my wife, Proposition 8:

  • by taking away the previously established right of gay couples to marry in California, took away a fundamental right from a class of people, and
  • defined gays as second-class citizens in the Constitution of California.

This may not be what Proposition 8 supporters intended. (I hope it was not.) But it is how it was received, and hopefully you can see how painful that would be: imagine that a law was passed saying that you (or a friend or family member) could not marry or stay married to the person you love.

There are probably people who have very different strong feelings about Proposition 8 or Brendan’s donation. If you do, I would be interested in hearing what you have to say. I invite anyone who is affected by this to open up and speak from the heart about what they believe and feel.

I’m sure there are Mozillians who are just as distressed that are not in a position to reach out or express their feelings at work. I hope knowing that you are not alone helps at least a little bit.

I’m available if anyone wants to talk about any aspect of this issue, or just needs someone to vent with, rant at, commiserate with, or whatever.

(Excepting commercial spam) I welcome all comments, public or private, regardless of who you are, what you believe, or the content or tone of your message.

Quietly Awesome

A couple weeks ago, I was in a lunch conversation about how Mozilla interviewers often expect, or even demand, that candidates act very enthusiastic during interviews. If a candidate doesn’t act enthusiastic, it tends to be commented on negatively during the debrief. It’s understandable–we definitely want people who are excited about their work and about the Mozilla project.

But I think it’s also a big mistake that could cause us to lose out on great people. We have hired multiple people who didn’t sparkle with enthusiasm or excitement during interviews, but turned out to be incredibly skillful, dedicated, passionate developers. What appears to be lack of enthusiasm may just be introversion.

Introversion has been all over the blogosphere and in the media lately, probably because of a new book about it. What strikes me most is that contemporary Western culture is said to misunderstand and deprecate introversion, but introversion is especially associated with creativity:

The need for balance is especially important when it comes to creativity and productivity. When psychologists look at the lives of the most creative people, they almost always find a serious streak of introversion because solitude is a crucial ingredient for creativity.

Charles Darwin took long walks alone in the woods and emphatically turned down dinner party invitations. Theodore Geisel, better known as Dr. Seuss, dreamed up his creations in a private bell tower in the back of his house in La Jolla. Steve Wozniak invented the first Apple computer alone in his cubicle at Hewlett Packard.

The worst part of it all to me is that although I am an introvert myself, I have also harbored negative attitudes toward introversion, including my own introversion, and have even on occasion remarked negatively on “lack of enthusiasm” from interview candidates.

So, how about we stop looking for overt “enthusiasm” as a key quality in people to hire? Some people’s enthusiasm just happens not to show in that way. Instead, I suggest looking for people with a passion for what they do, which can be seen sometimes by their words and emotional state, but more reliably by what they’ve done: deep knowledge of a subject, open-source contributions, student research, creative personal projects, or overcoming hard obstacles.

“What’s WebKit?”

Preface: I’ve kind of wanted to write some more personal and opinion-based blog entries for a while, but it’s been hard to start because I’m more nervous about self-expression than self-explication. And I think “Why would anyone want to read my random strange thoughts anyway?” And then when I think about some potential topics, it seems like I need to do a whole bunch of research and thought, that sounds like it would take a long time, so I don’t actually start to write. But there are things I’ve learned that I’ve wanted to share. So, time to give it a shot, start with something simple, and see how it works.

From time to time I read a blog post that says something like “One thing I’ve noticed about successful people is that they ask lots of questions, so they always keep learning. Ask questions.” All right, seems like sound advice, but the posts usually don’t bother to comment on why we don’t all already ask questions as freely as four-year-olds.

A story: When I started working for Mozilla, in late 2007, the first thing I did was go to a week-long meetup in Toronto [1]. Over dinner one night, conversation turned to ‘They do it like this in WebKit’ or ‘WebKit has this feature’ or some such. Just because I didn’t know any better, I asked “What’s WebKit?”

I saw a couple of cocked eyebrows. One of my dinner mates answered: “It’s the iPhone browser.” Me: “Oh”, thinking they’re thinking we just hired this guy and he doesn’t know what WebKit is? Well, I made it through dinner and I’m still here, so I guess it was OK.

The point is, it can be scary to ask a question. Is this a stupid question? Am I supposed to know this already? Does everyone else know it already? Will it make me seem like I don’t know anything? Will everyone think I’m wasting their time? To ask a question is to admit ignorance.

It makes me wonder if part of the reason “successful” people have a reputation for asking questions is that they have less to fear. If you’ve already founded a $100M startup or given a hit TED talk, you probably [2] worry less about anyone thinking you don’t know anything.

But maybe ignorance shouldn’t be embarrassing anymore: the universe of knowledge is huge and constantly expanding, so to a first approximation, no one knows anything anyway. If you don’t know fact X, maybe that’s just because you were too busy learning topic Y. And remind yourself that a bunch of smart, successful people ask lots of questions so you won’t look bad imitating them.

That’s what I did during a recent discussion where WebRTC came up. I didn’t understand the discussion, and I wanted to, but “WebRTC” sounded like a pretty basic thing that I was supposed to know about, being a web browser person and all, so I didn’t speak up the first few times I heard it. But when it came up again, I told myself smart people ask questions, took a second to calm myself, and said, “Hey, dumb question [3]–What’s WebRTC?” And I learned.

So I have a little suggestion: Next time you hear about something and don’t know what it is, and you’re feeling a little bold, feel free to ask “What’s WebKit?” And even more important, when you get asked that question, see not ignorance, but curiosity, and be sure to reward that person’s desire to learn, and their courage.

[1] The meetup was on “Mozilla 2.0”. Sounds pretty quaint today.

[2] Just speculation: I wouldn’t know.

[3] See what I did there?

Mozilla JS Development Newsletter 1/25/2012-2/29/2012

I waited a couple weeks after the last update to accumulate more cool stuff to talk about, and then I ended up having to wait a few more to get sick, get better, travel, give a talk, and then (partially) dig out from the backlog. Turns out there’s a ton of updates by now:


A big thanks and congrats to Bill McCloskey, who recently landed Incremental GC to Nightly! What was once a very noticeable 100-200ms GC pause every 10 seconds or so is now a sequence of 10ms hardly noticeable mini-pauses. Subjectively, it’s a huge improvement on the games and demos that I’ve tried.

There aren’t too many GC benchmarks out there to try to get ‘objective’ with, but there is Google’s Spinning Balls GC pause test. Spinning Balls animates a bunch of circles while allocating lots of memory (by running the v8-splay benchmark code during the animation). The test shows the distribution of pause times, and a dimensionless score that seems to heavily penalize long pauses, even if there are few. On Firefox 10, the pauses are about 70ms, and I get a score of 139. In Nightly, with IGC on, the pauses are mostly 30ms, and I get a score of 674.

You may have noticed that I said IGC makes pauses 10ms, so why do I get 30ms pauses in Spinning Balls? The main reason is that Spinning Balls actually measures the time delta between animation frames, and frames are 16ms apart for 60Hz animation. If a 10ms GC happens to overlap a frame refresh, then we’ll miss one, so the delta will be measured as 33ms. Secondarily, there are some short pauses associated with GC that aren’t incrementalized yet, which also makes us miss about 1 frame every GC or so. Specifically, finalizing objects (e.g., DOM objects) isn’t incremental, which can generate 20ms pauses or so. Also, we currently have to throw away compiled native code on GC, and recompiling can pause us for a few more ms. We plan to fix both of those limitations in followup work.
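The frame arithmetic above can be sketched in a few lines (my own illustrative numbers and function names, not the benchmark’s actual code):

```javascript
// Why a ~10 ms GC pause shows up as a ~33 ms delta in Spinning Balls:
// the benchmark measures time between animation frames, so any pause
// that causes a missed refresh adds a whole frame interval.
const FRAME_INTERVAL_MS = 1000 / 60; // ~16.7 ms between frames at 60 Hz

// If a pause makes us miss `missedFrames` refreshes, the measured
// frame-to-frame delta is (missedFrames + 1) frame intervals.
function measuredDelta(missedFrames) {
  return (missedFrames + 1) * FRAME_INTERVAL_MS;
}

const normal = measuredDelta(0);  // ~16.7 ms: no pause
const oneMiss = measuredDelta(1); // ~33.3 ms: a 10 ms GC overlapped a refresh
```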

If you’re testing IGC on nightlies, one thing to watch out for is that add-ons (and potentially browser features as well) can disable IGC. You can check whether it’s enabled by going to about:support, scrolling to the bottom, and looking for “Incremental GC”. If it says “1”, IGC is enabled. If it’s “0”, please test disabling add-ons and/or file a bug so we can fix it.

Why does IGC get disabled? IGC relies on “write barriers” to help the main program and the GC cooperate. Binary JavaScript components need to implement write barriers for any special objects they create, or else IGC is unsafe. So, if the browser detects anything that wasn’t coded with IGC in mind and would cause a problem, it disables IGC.
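To give a feel for what a write barrier is, here’s a purely conceptual sketch (the real barriers are C++ inside the engine; every name below is made up for illustration):

```javascript
// Conceptual pre-write barrier: before a reference is overwritten
// while an incremental GC is in progress, remember the old value so
// the collector can still mark through it.
let gcInProgress = true;   // pretend a mark phase is underway
const rememberedSet = [];  // edges the GC must revisit

function barrieredWrite(obj, field, newValue) {
  if (gcInProgress && obj[field] !== undefined) {
    rememberedSet.push(obj[field]); // log the old edge for the GC
  }
  obj[field] = newValue; // then do the ordinary write
}

const node = { child: { name: 'old' } };
barrieredWrite(node, 'child', { name: 'new' });
// rememberedSet now holds the old child. A special object created by a
// binary component that skips this step is exactly what forces IGC off.
```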

If all continues to go well in the testing channels (and things are looking good so far), IGC will ship in Firefox 13.

[update 5:52pm 3/2]
Aargh, I forgot to mention, that’s not even all the GC work that’s been going on: Terrence Cole added write barriers for generational GC, and he’s about done refactoring the mark phase to support moving GC. So we’re just about ready to start implementing moving GC inside the JS engine.

New Hire

Please welcome Kannan Vijayan to the IonMonkey team! He’s working in the Toronto office, our first JS person in Toronto. He’s taken on property deletion for IonMonkey as his first patch.


Chris Leary has left the JS team to try his hand at a startup. During his time on the JS team, Chris got regular expressions squared away for Firefox 4, contributed to JägerMonkey, implemented JIT hardening for JägerMonkey, created InfoMonkey, fixed the usual pile of bugs, and wrote hilarious blog entries. His final mission was on the IonMonkey team, where he mentored Andrew Drake building the register allocator, and implemented function inlining and the all-important on-stack invalidation. Best wishes on your new venture!


Jason Orendorff landed ES6 for-of, which is a new iteration construct over values that also works with ES6 generators and iterators.

Jason has also been doing some very cool work to help TC39 specify some ES6 constructs. The draft spec for Set gives only one constructor, with no arguments, creating an empty set. Jason argued for adding a constructor that takes an iterable argument, did some measurements showing the corresponding one-argument form was popular in Python, got support from some committee members, and implemented it for SpiderMonkey.
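The constructor Jason argued for looks like this (this one-argument form did end up standard, so it runs in any modern engine):

```javascript
// One-argument Set constructor: build a set from any iterable,
// deduplicating along the way. This is the form the measurements
// showed was popular in Python, as set(iterable).
const langs = new Set(['js', 'python', 'js', 'c++']);
langs.size;      // 3: the duplicate 'js' collapses
langs.has('js'); // true
```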

Jason also did some experiments on Maps that iterate over their elements in the order they were added, a.k.a., “deterministic hash tables”. Most map constructs in most languages don’t specify an iteration order, presumably because defined order is expected to cost a lot in performance. Jason’s experiments found that deterministic hash tables aren’t really any slower, although they do use more memory. It’s a fairly promising result, and either way, having the data can only help the committee.
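Concretely, deterministic order means iteration follows insertion (insertion-order Maps did make it into the final ES6 spec, so this runs as-is today):

```javascript
// A deterministic Map iterates entries in insertion order, no matter
// how the keys hash or compare.
const scores = new Map();
scores.set('banana', 2);
scores.set('apple', 1);
scores.set('cherry', 3);
const order = [...scores.keys()]; // insertion order, not alphabetical
```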


The IonMonkey team is now mostly doing optimizations and adding support for more JS features. They recently got a big improvement in their Kraken score. Check out their score on desaturate.

Super Snappy + Sandboxing

Brian Hackett’s got some interesting new ideas. First, he’s working on “Super Snappy”, which allows the browser to run UI and content on separate threads, so that if content lags (e.g., an infinite loop in JS), the UI can still respond. There’s a WIP patch in the bug.

Second, he’s got an idea for bringing sandboxing to Firefox. It’s just an idea so far but it looks promising to me.

Brian also landed “chunked compilation” a while back. Background: JM+TI uses types to compile JS functions to native code. If its assumptions about types are later invalidated, it may have to recompile the function. Say you have a program with 1000 small functions, and types get invalidated 10 times. That means recompiling 10 small functions, which takes negligible time. Now say you have a program of equal size but 1 large function. If types get invalidated 10 times, that means recompiling the 1000x-as-big function 10 times, which is very noticeable. Chunked compilation cuts large functions into pieces that are small enough so that recompilation is not a problem. This is particularly important for Emscripten programs.
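The arithmetic behind chunking can be sketched like this (illustrative numbers only, assuming recompilation cost is proportional to the size of the piece being recompiled):

```javascript
// Model: code is split into pieces; each type invalidation forces one
// containing piece to be recompiled, at cost proportional to its size.
function recompileWork(pieceSize, invalidations) {
  return pieceSize * invalidations;
}

const smallFunctions = recompileWork(1, 10);    // 10 units of work
const oneBigFunction = recompileWork(1000, 10); // 10,000 units of work
// Chunked compilation caps pieceSize, turning the second case into
// something close to the first, even for one huge (e.g., Emscripten)
// function.
```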

JIT Hardening

Chris Leary landed JIT hardening for JägerMonkey. It’s impossible to know how vulnerable to JIT spraying we actually were before, but we should at least be much less vulnerable now.

Stuff that may affect you

Jeff Walden removed sharp variables. Sharp variables were a feature that allowed cyclic data structures to be serialized in a JSON-like format (using things like ‘#3’ as names for elements, so multiple things could point to them). But sharps were non-standard and not in use on the open web, so we took them out as part of our ongoing cleanup and simplification of SpiderMonkey code.

Jeff’s recent work on switching our fixed-width integer types (e.g., uint32) to standard types (e.g., uint32_t) also unlocked the ability to get rid of a bunch of other old typedefs, so I’ve been doing that. So far, I’ve taken out JSIntn/JSUintn, intN/uintN, jsrefcount, JSPackedBool, and jsint. jsuint is next. If you’ve been using these, you’ll need to switch to standard types. Find and replace pretty much works. (By the way, I’d like to do something with JSBool eventually, but I’ll need to be more careful, so that will take longer. JSBool is 4 bytes, while bool is usually 1 (though the standard doesn’t fix its size), so switching JSBool to bool could break things. In particular, if the JITs call functions with JSBool parameters or return values, that would break. Also, MSVC 2005 compiling C doesn’t have ‘bool’. I’ll probably start by making JSBool a typedef for bool, see if that works, and if time goes by without problems, then take out the typedef and just use bool.)

Mozilla JS Development Newsletter 12/07-1/24

I was holding off on updates around the holidays, when things were kind of quiet, but lots of stuff is happening again:

ECMAScript 6

Jason landed Simple Maps and Sets per the draft specification. Yay!

Speaking of which, Chris Leary blogged yesterday about how you can increase your badassery, stop him from harassing motorists on 101, and make all sorts of wonderful things happen by adding even more ES6 features to SpiderMonkey.


Incremental GC continues to move forward: Bill put a series of patches up for review last week, and about half of them have r+ now. I’ve been playing a bit with the larch branch (which has IGC), and it’s looking really nice, especially for games.

Generational GC is also powering up: Terrence has been adding write barriers for generational GC.


The IonMonkey infrastructure is complete, barring any needed changes that are discovered later. (Which of course has already happened: Chris came up with a much simpler way of doing on-stack invalidation.) So the team will start to shift focus to optimization, and they’ve already got a good score on 3 SunSpider benchmarks.


Jim (with some help from Jason) continues to firm up the new debugger API. They also created a github project, jorendb, which is a demo command-line debugger for JavaScript.

Stuff that might affect you

Simplifications and cleanups continue:

Jeff changed our integer types to use stdint.h: where we used to say uint32 (or theoretically even JSUint32 (shudder)), we now say uint32_t. The immediate reason was to fix a recurring bug where another header file had fixed-width integer types with the same names but slightly different definitions, causing occasional breakage when the different definitions crossed. But it’s also nice to just use the standard types and not have anything special to fix or learn about, even if it does cost a _t, which was somewhat controversial.

Luke removed JSThreadData and JSThread. This means that JSRuntimes are now single-threaded, so if you want to use SpiderMonkey with multiple threads, the only supported way to do it is to make multiple non-communicating JSRuntimes. This is a great simplification for the engine and removes a bunch of sources of bugs.

‘Ms2ger’ did various cleanups in the engine, especially around header files and reducing the number of installed header files toward just the actual API headers (i.e., not allowing users to poke around in the guts of the engine through normally included headers).

Other Stuff

‘qjivy’ added a MIPS backend for our JIT compilers.

Tom Schuster fixed a bunch of bugs and added support to make "eval([...])" fast, like our existing optimization for eval({...}).
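The pattern this targets is eval of a literal, which sites sometimes use as an ad-hoc JSON parser (illustrative only; the speedup lives inside the engine and isn’t visible in the result):

```javascript
// eval of an array literal: the newly fast case...
const arr = eval('[1, 2, 3]');
// ...alongside the already-optimized object-literal case (the
// parentheses keep the braces from parsing as a block statement).
const obj = eval('({"x": 1})');
```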

‘Adam’ rewrote part of the decompiler printer buffer in order to simplify one of our memory allocators.