
Converting a test to be e10s compatible

December 10th, 2015 | Category: Uncategorized

I was working on another bug the other day when I came across a disabled test. The comment for why it was disabled seemed like it might be low-hanging fruit for conversion to be e10s-compatible, so I took a look.

A couple of things leapt out to me immediately: the test isn’t using BrowserTestUtils and it isn’t using promises. So, the first step is to convert it to use BrowserTestUtils and Promise. Here’s the diff.

Now that we’re using promises and the nice helpers, let’s see why the test isn’t compatible. The first load is of a chrome:// URL. This is guaranteed to load in the parent process, meaning that our first test, testXFOFrameInChrome, can remain untouched. We can use its contentWindow with impunity.

The second test, testXFOFrameInContent, is the reason this blog post exists. It loads a content document, meaning that its browser will be a remote browser and the contentWindow would be a CPOW. Instead of using it directly, we can use another helper: ContentTask. This will let us perform an operation in a given browser's associated child process and fulfill a Promise when it finishes. Here’s that change.
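A minimal sketch of what that kind of change looks like (the names testBrowser and "xfoFrame" are hypothetical stand-ins, not the identifiers from the real test): ContentTask.spawn serializes the callback, runs it in the browser's child process, and resolves when it finishes.

```
// Hedged sketch, not the actual patch: "testBrowser" and "xfoFrame" are
// placeholder names. The callback runs in the content process, where the
// page's window is available as the `content` global.
yield ContentTask.spawn(testBrowser, null, function() {
  let frame = content.document.getElementById("xfoFrame");
  is(frame, null, "X-Frame-Options should have blocked the frame");
});
```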

We’re almost done! However, if we run this version, we’ll still fail due to a small difference between the is function in browser-chrome tests and the one in ContentTask.jsm. The version in ContentTask.jsm is stricter than the one in normal tests, so when we check the return value of document.getElementById against undefined we get a test failure. We fix that by checking against the correct value (null). Another thing to note is that ContentTask decompiles and recompiles the callback function in the child process. That means that you can’t use any global variables or functions. You can pass a single parameter to the function, though, as long as it can be structured cloned. The content window is available through the content global in frame scripts.
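The null-versus-undefined distinction is easy to see in plain JavaScript. Here’s a standalone illustration, with the DOM lookup’s result stood in by a plain value (since getElementById returns null, not undefined, for a missing element):

```javascript
// document.getElementById("no-such-id") returns null, not undefined, when
// the element is missing. We stand in for that DOM call with a plain null
// to show why a strict comparison distinguishes the two.
const lookupResult = null; // what getElementById gives for a missing id

// A loose check can't tell them apart...
console.log(lookupResult == undefined);  // true: null loosely equals undefined

// ...but a strict one (like the stricter is in ContentTask.jsm) can:
console.log(lookupResult === undefined); // false
console.log(lookupResult === null);      // true
```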

One last note about this test. When I was writing it, I used the BrowserTestUtils.loadURI function, which returns a Promise. However, that promise doesn’t resolve when the URI has finished loading, only when the load starts. So I ended up wasting a bunch of time before realizing that I needed to use BrowserTestUtils.browserLoaded after it to ensure the second test runs at the right time.
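To see why the ordering matters, here is a small self-contained simulation. These are mocks, not the real BrowserTestUtils APIs: they merely mimic the timing described above, where loadURI resolves when the load starts and browserLoaded resolves when it finishes.

```javascript
// Mock versions (not the real BrowserTestUtils!) mimicking the timing the
// post describes: loadURI resolves at load *start*, browserLoaded at load
// *end*. The events array records the order in which things happen.
const events = [];

function loadURI(browser, uri) {
  // Resolves as soon as the load has merely been started.
  return Promise.resolve().then(() => events.push("load started"));
}

function browserLoaded(browser) {
  // Resolves once the load has actually finished (simulated with a timer).
  return new Promise(resolve =>
    setTimeout(() => { events.push("load finished"); resolve(); }, 10));
}

async function runTest() {
  const browser = {}; // stand-in for a real <browser> element
  const loaded = browserLoaded(browser); // start listening before loading
  await loadURI(browser, "http://example.com/");
  events.push("test ran after loadURI");       // too early: load not done yet
  await loaded;
  events.push("test ran after browserLoaded"); // now the document is ready
}

const done = runTest();
done.then(() => console.log(events.join(" -> ")));
```

Awaiting only loadURI would have let the checks run before the document existed, which is exactly the bug I kept hitting.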

Now, our test passes and we can get it reviewed and checked in!


The old HTML parser is dead! Long live the HTML parser!

August 12th, 2013 | Category: Uncategorized

Many years ago, I got my start contributing to Mozilla by working on the new HTML parser. At the time that I was starting to contribute, there were big changes happening in Mozilla-land, most notably AOL laying off basically all of its Netscape engineers. With that stroke, the available manpower working on Gecko shrank to a shadow of its former self, leaving large portions of the codebase unowned. Naturally, this meant that the least sexy portions of the code received even less attention than before, and the parser, which had not been actively worked on (except to fix critical bugs), fell even further onto the back-burner. This abandonment was due not only to the fact that HTML parsers are not the most exciting thing in the world to work on, but also to a few other factors:

  • The code was very poorly documented, with many comments being incomprehensible. Understanding the code required reading both the immediately surrounding code as well as most of the rest of the class. In addition, there were often many ways of stating a simple constraint (e.g. HTML element A can contain HTML element B), each one with its own subtle side-effects.
  • The algorithm used was non-deterministic with regard to HTTP packet boundaries! When HTML5 was introduced, this one factor was the reason that Ian Hickson ignored Gecko’s algorithm entirely when specifying what should happen.
  • The coding style used in the relevant source files was unlike any other code in the tree (as well as being both terse and overly verbose at the same time).

With this in mind, when Mozilla made the decision to switch to an HTML5 parser, we decided to go with a new parser entirely. Henri Sivonen was kind enough to figure out how to hook his existing implementation up to Gecko (including an automatic translation from Java to C++). We were able to switch over to this new parser for everything except parsing the magical about:blank page. Henri has been working on fixing that; however, in the meantime, we’ve been schlepping the old code around with us. The other day, I decided to remove the parts of the old parser that were no longer useful to its remaining use (that is, everything that didn’t simply output a blank document).

The result of all this is that with bug 903912 landed, the old parser is basically gone. The most complicated bits have been torn out and we now only have one HTML parser in the tree.


Preparation and Learning French

June 26th, 2012 | Category: Uncategorized


There are books written on how to learn languages and how to prepare for a trip to France, so I won’t spend a lot of time on my own approach. Having said that, though, there are a few things that are worth mentioning explicitly here. The most important thing, though, is to have fun with it. Learning a language gives you a means to speak to an entirely new world of people and a new way of understanding cultures (both how different and how similar they can be).

My experience with French started when I was two or three years old. My parents (who had already started teaching me a few French words and songs) enrolled me in a class. Unfortunately, we moved when I was still pretty young, so I ended up losing a lot of what I had learned. That being said, I think that some of the grammatical constructs and accent stuck with me. Furthermore, throughout my life, my family has used French in certain circumstances. For example, when my mom asks me how much money I have on me (if we’re going into a shop), I respond in French to avoid announcing to the world at large that I have $100 on me.


When learning a new language, there are two separate axes that are useful to think about: comprehension and speaking. Some things, like learning new vocabulary, will improve both at the same time, but I needed to work on them separately. I found that practicing listening to French via podcasts (five to ten hours a week) helped me a lot here; basically, any French will do. I found RFI’s Journal en français facile to be very helpful, as well as a fun way to keep up with world events. I also subscribed to a couple of RTL’s podcasts. Once you achieve a basic level, watching movies and TV shows helps to teach you some of the slang (argot) as well as helping you acclimate to how you’re likely to hear French spoken on the street. The French that you hear spoken by radio personalities is very clean, very formal, and very clearly enunciated, and it will take a lot of time to get used to the pace and grammatical shortcuts of casual French. If you do watch movies, try to do so without subtitles: it is very easy to read the subtitles and not listen to the French, defeating the purpose. If you’re having too much trouble understanding a movie without subtitles, or you can’t disable them, concentrate on ignoring them as much as possible.


There’s a secret method for learning how to speak a new language, but don’t tell anybody: practice! Throughout the two years between my second trip to Paris and my six-month stay, I did the following:

  • Read French articles aloud (occasionally recording and replaying them to make sure my accent was somewhat correct).
  • Found French meetup groups, as much as my schedule permitted.
  • Spoke to myself a lot: in the shower, in my car.
  • Maintained an internal French dialog of what was happening in my life and what I was seeing, which is a great way to learn new vocabulary (e.g. when I was watching Harry Potter, I’d say to myself: “ok, maintenant, il utilise son … wand (sa baguette) … pour se defendre!”).

Every little bit helps. Speaking out loud is very important: even if you have the best vocabulary and perfect grammar, you need to get used to saying unusual sounds and unusual combinations of sounds. For some words (like peripherique), I would actually sit down and, for minutes at a time, say the word over and over until it rolled off my tongue.


Where did all of this work get me? When I got to France, my comprehension had gotten quite good. My French friends would have to take a little care to be sure I could keep up, but not to a point where it was onerous for them. Also, once I got to France and was surrounded by French, my level improved in leaps and bounds. As for fluency, I could speak and get my point across, but with a lot of grammatical mistakes and a lot of hesitation. The cure for this was simply to speak more and to force myself to go faster (more on this later). Overall, I would say that my level had moved from low intermediate to high intermediate.

Once I got to France

It was very important for me that I spoke French in France. This is actually not as obvious as it sounds: a lot of people speak English, and will take the opportunity to practice with you. Don’t let them! It is too easy to sit back and speak English, which won’t help your French at all. If someone responded in English (usually under the guise of “Oh, I speak English, it’ll be much easier for you”), I would respond, “Merci, mais je prefere parler en français” (Thank you, but I prefer to speak French). While it was true that it was easier for me to speak in English, I was willing to struggle as much as necessary. So, don’t take the easy route out!

Final note

I don’t know if I’ve made it clear enough, but the one constant here is that it will take time, effort, and patience. Don’t get discouraged! As I wasn’t taking courses, I didn’t have any tangible evidence of my progress, except that occasionally I would say a complex sentence without stuttering or mistakes, stop, and realize, “Wow, I just said that!” That, in a way, is a better yardstick of progress than a simple grade.


6 Months in Paris

June 25th, 2012 | Category: Uncategorized

What happened?

I’m sitting here in Mozilla’s San Francisco office after having spent 6 months in Paris, working in our office there. While I was in France, I basically lived my life in French and spent as little time as possible speaking in English. It was an amazing time, and I’m sad to be back in California!

As an American in Paris, I noticed a bunch of interesting things about living in France both in terms of having to speak in a second language and differences about our cultures. So, I am going to spend a couple of blog posts talking about my observations and experiences. I was very lucky to have coworkers and friends in Paris who were accepting of my mistakes and willing to let me bumble my way through my trip (and patiently explain when I made mistakes, both culturally and linguistically). While my experience was in France, I think that a lot of what I learned can be applied to other countries that don’t use English as their primary language.

PAQs (Possibly Asked Questions)

Why Paris?

My family has a special relationship with France. My parents both speak French (even though they’re both American) and both lived in Paris for an extended period of time. I have cousins and uncles who go to France regularly. As a result of this, as I grew up, I found myself surrounded to some extent by French and French culture. For example, my parents have been trying to teach me French ever since I was three. On the strength of this familiarity with French, I’ve been to France (and more precisely to Paris) several times, each of them for a month and every time I’ve been there, I’ve found myself saying to myself, “I would really love to live here.” So, when I realized that I might have the opportunity, I jumped at it.

Where did you stay?

Going to a foreign country for six months is a little awkward. It’s just enough time that it might not make sense to keep your apartment in your home city, but not quite long enough that it’s worth the hassle of finding a new one when you get back (especially in San Francisco). Fortunately, I have an “aunt” (in a very loose sense) who avoids winters in Paris by living elsewhere in the world for four months and who doesn’t like to have an empty apartment, so while I was in Paris, I was doing double duty working for Mozilla and housesitting (it’s a hard life, I know!).


All Hands

September 23rd, 2011 | Category: Uncategorized

Last week was the Mozilla Corporation All Hands meeting. This means the entirety of Mozilla (or, at least, those of our employees who could make it) packed into the convention center in San Jose for a week of fun activities and meetings with coworkers from around the globe. We saw neat demos of new technologies and got to apply pressure to people who owed us reviews.

Sitting in the convention center for the first keynote, I couldn’t help thinking back to the first All Hands that I attended. I think it was the second Mozilla Foundation (not yet Corporation) All Hands, the first having been held before I got my internship. I found this snippet from an e-mail about the event:

Date: June 21, 2005
Subject: All Hands Meeting Information – tomorrow at 12 Noon, 650 Castro Street

At 1:00 PM we will start the meeting. Right now there is no formal agenda, but Mitchell will talk and there will be some updates from various members of the organization.

Note that 650 Castro Street didn’t mean the 3rd floor of the building as it does now. Instead, some company had rented us a single room on the ground floor that was being used to hold board meetings. The 25 or so of us sat around in the room to listen to people such as Mitchell talk about the direction of Firefox and the Foundation. Beyond that, I don’t really remember what was discussed in too much detail, but it was a fun time. After the meeting, we all went go karting in Menlo Park (a race that Josh won) before returning to the office the next day to get back to work.

It’s funny to me that the logistics e-mail I found was sent out a day before the event and there was no real set schedule other than “we’re going to talk for a bit.” In fact, there weren’t even enough of us that we had to get buses; instead, we simply piled into various coworkers’ cars.

The All Hands meeting last week saw nearly 600 people come in from all around the world. The first planning e-mails went out months in advance and we filled an entire hotel for a full week. Furthermore, the entire first day was filled with keynotes and speakers had a fixed amount of time in which to deliver their messages. Every evening had an activity associated with it, including shuttles to transport everybody to and from the events.

What a change it’s been. I can’t wait to see what next year’s events will hold.


How I got started at Mozilla

March 23rd, 2011 | Category: Uncategorized

Over in the newsgroups, there’s a raging discussion taking place about how difficult it is for new contributors to contribute to Mozilla. As part of this sort of discussion, people tend to post “my first patch” stories. Rather than reply there, I figured I’d take this chance to update my blog 🙂

When I was in high school, I remember hearing about the launch of Netscape 6 and how it was “open source.” Being a budding computer programmer, this was an extremely exciting development. For the first time, I was going to get to see how the “real programmers” did things. So, I downloaded the source code and started looking through it. I’ve always been a “look first, jump later” type person, so even once I’d downloaded the source code and found bugzilla, I lurked. I think I lurked for close to half a year, following bugs that I was interested in by keeping them open and reloading once every few days. Occasionally, if I came across a bug I thought I could fix, I’d write a patch locally, and wait for the assignee to fix it himself and compare our approaches.

The summer after I’d graduated from high school, I was looking through the source code and came across the “htmlparser” top level directory. Now, this was something I could get my brain around. Somehow, I linked that directory to the HTML: Parser component in bugzilla (probably due to a code comment) and started looking through bugs, commenting in a couple of them when I had something useful to say. After a little bit, I found bug 154120, a small bug in “view source.” After having read through the HTML tokenizer code for a while, I’d seen a few things related to “view source”, so I figured out where the bug was and fixed it! I’d been watching other Mozilla developers working in bugzilla and had observed some of the magic incantations (“diff -u” and attaching the result to the bug), but as I was entirely unfamiliar with the process as a whole, I didn’t realize the importance of requesting “review” (as I recall, my hope was that the current assignee of the bug would see the patch and do something useful).

And, nothing happened. I had CC’d myself to the bug and attached the patch, but had no idea of how to advance my patch further. So, I drifted away and went to college for a year and waited until the next summer, entirely forgetting about the patch I had attached. You can imagine my surprise, then, when out of nowhere, I got several e-mails from bugzilla telling me that some guy named Boris Zbarsky had not only seen my patch, but updated it to the current trunk and found it “exactly right.” I still remember the surge of adrenaline kicking in on receiving the e-mail for comment 19 in that bug: “My code is in Mozilla!”

What else could I do? I was instantly hooked on that feeling. Having had success with one view-source bug, I found another one and commented in it. Fortunately, Boris was already CC’d on that one so he could respond and away I went: another (small) bug quickly dispatched, another rush of adrenaline. With Boris to pester on IRC when I had questions and to review my patches (I can only imagine how much I tried his patience, especially in those early days) I was off and running to becoming a developer on Mozilla.


xpcnativewrappers=no going away

February 11th, 2010 | Category: Uncategorized

Way back in 2005, jst, brendan, and bz combined to implement XPCNativeWrappers (or, as I’ll refer to them, XPCNWs). XPCNWs have the somewhat bizarre behavior that they incompatibly change the view of an object from an extension’s perspective. For example, an extension that grabs a content window’s “window” object and tries to call a function on it that is not part of the DOM would work before XPCNWs, but not after.

Not having concrete data on how many extensions would be affected by such a change and erring on the side of caution, we implemented a way to opt out of XPCNWs. Basically, if your extension broke because of XPCNWs you could ask Gecko to give you the old, insecure, behavior. The intent at the time was to let authors flip the switch off, then go back to their extension and fix things until they could turn on XPCNW support.

Now, in order to support a more secure and easier to use platform, it is necessary to remove support for xpcnativewrappers=no. This will mean some work on extension authors’ parts:

  • If your extension relies on xpcnativewrappers=no, your extension will stop working correctly when bug 523994 lands.
  • In order to fix it, you should identify the parts of your extension that require direct access to content objects. This should be limited to three cases:
    1. If your extension depends on XBL bindings attached to content objects (namely, being able to call functions or get and set properties created by the XBL binding) then you will need to use the .wrappedJSObject of the XPCNW.
    2. If you need to call functions or access properties defined by the content page (for example, if you wrote an extension to add a delete button to gmail and there’s a window.delete() function defined somewhere), you will likewise need to go through the .wrappedJSObject of the XPCNW.
    3. See the devmo page on XPCNativeWrappers for more.
  • Note that if all you do with content objects is use DOM methods, then everything should simply continue to work (and you shouldn’t be using xpcnativewrappers=no anyway)!
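For the second case above, the escape hatch looks something like this (a sketch only: contentWin is a placeholder for whatever content window your extension already holds, and window.delete() is the hypothetical page-defined function from the gmail example):

```
// Sketch: "contentWin" stands in for a content window the extension holds;
// delete() is the hypothetical page-defined function mentioned above.
// wrappedJSObject deliberately bypasses the XPCNativeWrapper, so only use
// it when you genuinely need page-defined script, not DOM methods.
var pageWindow = contentWin.wrappedJSObject;
pageWindow.delete();
```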

I’ll write a second post soon to describe the what and why of XPCNWs and .wrappedJSObject.


Working on the JS engine

June 30th, 2009 | Category: Uncategorized

Especially when working on old branches without some of the nice debugging helpers that jorendorff has implemented, I sometimes look at my gdb session and just know that I’m working on the JS engine:

(gdb) p $.atom
$11 = (JSAtom *) 0xb194f984
(gdb) p/x *(JSString *)((int)$ & ~7)
$12 = {length = 0x20000004, u = {chars = 0xaf434970, base = 0xaf434970}}
(gdb) x/4ch $.u.chars
0xaf434970:	97 'a'	98 'b'	99 'c'	100 'd'

Proving Difficult Assertions?

May 15th, 2009 | Category: Uncategorized

Several times over the past couple of weeks, I have wanted to make some sort of assertion about an invariant in Mozilla. Now, some invariants are easier to prove than others. For example, if I wanted to prove that a particular variable (such as nsHTMLDocument::mWriteState) only ever has one of four values, I can enlist the C++ type system and an enum to help check and enforce that for me. To me, the important point is not how the invariant is proven, just that it is proven.

For a more difficult example, I might want to show that, in a web page’s <script> tags, all scripted functions are allowed to access their scope chains. This might seem like a vacuous invariant to prove, but given that whether we perform security checks depends on this invariant, it seems worth the exercise. However, if you get the point and don’t want to slog through a fairly involved example involving JS and caps, you might want to skip to the last paragraph for the punchline.

Unlike the first example, the C++ type system cannot help us here. Instead, we first make an assumption: any time we compile JS code, the principal we compile that code with is equal to the principal of the scope chain (you can check this assumption by reading nsScriptLoader::EvaluateScript). Now, we note that in nsScriptSecurityManager, to compute the privileges (principals) of a function, there are two cases: functions are either cloned (in which case they inherit the principal from their parent) or not (in which case, we use the principal of their script). So, we can say that our invariant holds true for any non-cloned function (thanks to our assumption earlier).

Now, what about cloned functions? Well, since we only care about functions in the <script> element, we only have to see how these functions are cloned. Scanning through js/src/*.cpp we can see that the parent argument to js_CloneFunctionObject is always cx->fp->scopeChain. Great! To finish, we go check what a scripted function’s scope chain is set to when it’s called. A quick glance at jsinterp.cpp verifies that the function’s parent is used as the scope chain, and we’re done. For extra credit (and to make this particular example useful) you can also prove that functions cloned by jsinterp.cpp keep the same principal (which, with what we’ve shown here, tells us that it’s OK to not do security checks when looking stuff up on the scope chain).

Whew, so that wasn’t so bad, right? (Hah!) I have the advantage of having worked on this code for the past 3 years, I made some gigantic assumptions (that I can back up) and was able to do most of that without cracking open my editor. Furthermore, I don’t think that it’s possible to usefully put that invariant (and why it’s true) into the source code, either as a comment or as part of the code. I say this because there are millions of these assertions, some that cross module boundaries, like this one, and some that only hold true for single functions. As a programmer working on this code, fixing bugs and adding features requires figuring out which invariants are being broken, or which ones I might affect by adding new code.

And finally, this brings me to my question. How can we write code that makes answering a question like “can a scripted function always access its scope chain?” easier? Is it more comments? Better variable names? I ask because these two examples were relatively easy compared to some of the invariants I’ve been dealing with lately, and when you’re writing code in C++ (and when that code implements a security system for a web platform) they don’t get less important.


A brief history of XPConnect

April 23rd, 2009 | Category: Uncategorized

XPConnect has been around since the beginning of Gecko. However, at the beginning, it served only as the bridge for JavaScript chrome code (chrome code is the code running in the browser itself, as opposed to content, web page, code). In those early days, XPConnect-using JavaScript code looked a lot different. Because XPConnect didn’t know about nsIClassInfo, properties had to be addressed through their interfaces. So instead of saying docShell.QueryInterface(Ci.nsIWebNavigation).canGoBack one might say docShell.nsIWebNavigation.canGoBack. Obviously, this wouldn’t work for web pages, so there had to be another solution.

That other solution was midl.exe. This was a Windows-only program that generated gobs and gobs of C++ glue from the IDL to connect DOM-facing objects to JavaScript. The generated stub code was put in dom/src/* and weighed in at over a megabyte. Because midl only ran on Windows, every time a developer wanted to change an IDL file, he would have to find a developer running Windows to generate new stubs for the affected interfaces before proceeding.

In 2001, John Bandhauer (jband) and Johnny Stenback (jst) started working to teach XPConnect about nsIClassInfo and to replace the midl generated code by simply calling functions through xptcall. This resulted in a significant codesize reduction and allowed people to change idl files, even if they didn’t run Windows. Furthermore, in order to support some of the weirder aspects of the DOM (such as setting window.location changing the currently shown page) the nsIXPCScriptable interface was fleshed out, allowing any C++ code to interact nicely with JavaScript code, instead of just the DOM code. For example, the storage code in Gecko now provides a nice enumeration API for JS to use for-in loops to iterate over query results. Without the magic of nsIXPCScriptable, JS would be forced to use uglier function calls.

