Dec 10

Converting a test to be e10s compatible

Category: Uncategorized

I was working on another bug the other day when I came across a disabled test. The comment for why it was disabled seemed like it might be low-hanging fruit for conversion to be e10s-compatible, so I took a look.

A couple of things leapt out at me immediately: the test isn’t using BrowserTestUtils and it isn’t using promises. So, the first step is to convert it to use BrowserTestUtils and Promise. Here’s the diff.
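
The core of that conversion is wrapping one-shot events in promises so the test reads top to bottom. Here’s a minimal sketch of the pattern (waitForEvent and runTest are illustrative names, not the actual helpers; BrowserTestUtils provides ready-made equivalents):

```javascript
// Promise wrapper for a one-shot event: the heart of converting a
// listener-callback test into a promise-based one.
function waitForEvent(target, eventName) {
  return new Promise(resolve =>
    target.addEventListener(eventName, resolve, { once: true }));
}

// Usage sketch: the test body can now use async/await instead of
// nesting load listeners and continuation callbacks.
async function runTest(browserLike) {
  const loadPromise = waitForEvent(browserLike, "load");
  browserLike.dispatchEvent(new Event("load")); // stands in for a real page load
  await loadPromise;
  return "loaded";
}

runTest(new EventTarget()).then(result => console.log(result)); // "loaded"
```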

Now that we’re using promises and the nice helpers, let’s see why the test isn’t compatible. The first load is of a chrome:// URL. This is guaranteed to load in the parent, meaning that our first test, testXFOFrameInChrome can remain untouched. We can use its contentWindow with impunity.

The second test, testXFOFrameInContent, is the reason this blog post exists. It loads a content document, meaning that its browser will be a remote browser and the contentWindow would be a CPOW. Instead of using it directly, we can use another helper: ContentTask. This lets us perform an operation in a given browser’s associated child process and fulfill a Promise when it finishes. Here’s that change.
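
The mechanics of shipping a callback to another process can be sketched outside of Gecko: serialize the function to source and rebuild it in a scope that contains only a content stand-in plus the one argument. toyContentTaskSpawn and fakeContent below are invented names for illustration; this is not the real ContentTask API:

```javascript
// Toy model of how ContentTask.spawn ships a callback to the child:
// the function is serialized to source, then rebuilt in a scope that
// exposes only a `content` global and the single cloneable argument.
function toyContentTaskSpawn(contentWindow, arg, callback) {
  const source = callback.toString();          // "decompile"
  const rebuilt = new Function(
    "content", "arg",
    `return (${source})(arg);`                 // "recompile" in the child scope
  );
  // Run it and resolve a Promise with the callback's return value.
  return Promise.resolve().then(() => rebuilt(contentWindow, arg));
}

// A fake content window standing in for the child's `content` global.
const fakeContent = { location: { href: "https://example.com/" } };

toyContentTaskSpawn(fakeContent, "href", function (arg) {
  // Only `content` and the passed arg are visible here; closures over
  // outer variables are lost in the serialize/rebuild round trip.
  return content.location[arg];
}).then(href => console.log(href)); // logs "https://example.com/"
```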

We’re almost done! However, if we run this version, we’ll still fail due to a small difference between the is() function in browser-chrome tests and the one in ContentTask.jsm. The version in ContentTask.jsm is stricter than that of the normal tests, so when we check the return value of document.getElementById against undefined, we get a test failure. We fix that by checking against the correct value (null). Another thing to note is that ContentTask decompiles and recompiles the callback function in the child process. That means that you can’t use any global variables or functions. You can pass a single parameter to the function, though, as long as it can be structured-cloned. The content window is available through the content global in frame scripts.
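
The equality trap here is plain JavaScript: a loose comparison treats null and undefined as equal, while a strict one does not. (Whether the stricter is() compares with === or Object.is is an internal detail; both behave the same in this case.)

```javascript
// document.getElementById returns null (not undefined) for a missing id.
const missingElement = null;

// A loose check, as in the lenient is(), accepts the mismatch:
console.log(missingElement == undefined);          // true  (null == undefined)

// The stricter comparison used by ContentTask.jsm's is() does not:
console.log(missingElement === undefined);         // false
console.log(Object.is(missingElement, undefined)); // false

// The fix from the post: compare against the value actually returned.
console.log(missingElement === null);              // true
```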

One last note about this test. When I was writing it, I used the BrowserTestUtils.loadURI function, which returns a Promise. However, that promise doesn’t resolve when the URI has finished loading, only when the load starts. So I ended up wasting a bunch of time before realizing that I needed to use BrowserTestUtils.browserLoaded after it to ensure the second test runs at the right time.
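
The trap can be modeled with two promises per load, one resolving at load start and one at load completion; awaiting only the first lets the test run too early. This is a toy model with made-up names (startLoad, state.loaded), not the BrowserTestUtils implementation:

```javascript
// Toy model of a page load: `started` resolves as soon as the load
// kicks off (like the loadURI promise), `finished` resolves later when
// the document is ready (like browserLoaded).
function startLoad(state) {
  state.loaded = false;
  const finished = new Promise(resolve =>
    setTimeout(() => { state.loaded = true; resolve(); }, 10));
  const started = Promise.resolve();
  return { started, finished };
}

async function main() {
  const state = {};
  const { started, finished } = startLoad(state);

  await started;
  console.log("after load start, loaded =", state.loaded);    // false: too early!

  await finished;
  console.log("after browserLoaded, loaded =", state.loaded); // true: safe to test
}

main();
```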

Now, our test passes and we can get it reviewed and checked in!

Comments are off for this post

Aug 12

The old HTML parser is dead! Long live the HTML parser!

Category: Uncategorized

Many years ago, I got my start contributing to Mozilla by working on the new HTML parser. At the time I started contributing, there were big changes happening in Mozilla-land, most notably AOL laying off basically all of its Netscape engineers. With that stroke, the manpower available to work on Gecko shrank to a shadow of its former self, leaving large portions of the codebase unowned. Naturally, this meant that the least sexy portions of the code received even less attention than before, and the parser, which had not been actively worked on (except to fix critical bugs), fell even further onto the back burner. This abandonment was not only because HTML parsers are not the most exciting thing in the world to work on, but also because of a few other factors:

  • The code was very poorly documented, with many comments being incomprehensible. Understanding the code required reading both the immediately surrounding code as well as most of the rest of the class. In addition, there were often many ways of stating a simple constraint (e.g. HTML element A can contain HTML element B), each one with its own subtle side-effects.
  • The algorithm used was non-deterministic with regards to HTTP packet boundaries! When HTML5 was introduced, this one factor was the reason that Ian Hickson ignored Gecko’s algorithm entirely when specifying what should happen.
  • The coding style used in the relevant source files was unlike any other code in the tree (as well as being both terse and overly verbose at the same time).

With this in mind, when Mozilla made the decision to switch to an HTML5 parser, we decided to go with a new parser entirely. Henri Sivonen was kind enough to figure out how to hook his existing implementation up to Gecko (including an automatic translation from Java to C++). We were able to switch over to this new parser for everything except parsing the magical about:blank page. Henri has been working on fixing that; however, in the meantime, we’ve been schlepping the old code around with us. The other day, I decided to remove the parts of the old parser that were no longer useful to its remaining use (that is, everything that didn’t simply output a blank document).

The result of all this is that with bug 903912 landed, the old parser is basically gone. The most complicated bits have been torn out and we now only have one HTML parser in the tree.

13 comments

Sep 14

Security Checks and enablePrivilege in Gecko, part 2

Category: enableprivilege

The problems

In my last post, I gave a brief overview of how security checks worked in Gecko and how enablePrivilege fit into that model. Clearly, it was not perfect; otherwise, we wouldn’t be changing anything. In particular, the old model had the following major problems:

Security checks took place in C++
Because we had security checks scattered throughout our C++ code, the checks ran even when the method was called from other C++ code. This meant that we had cases where an action performed by C++ code looked like it came from whatever JS code had happened to run last, which could result in actions being denied when they shouldn’t have been. Furthermore, the security-checking code was unsafe by default, in that if it was unable to determine the privileges of the running code, it returned maximum permissions.
There were multiple ways of being privileged
Security systems should be as simple as possible, and it should be as easy as possible to determine privilege levels. With our previous system, there were the privileges granted by where the code came from (its origin) as well as those granted by whether it had happened to call enablePrivilege recently. Combined with the above problem, this meant that when we wanted to ask, “Has the running code called enablePrivilege?”, we had to deal with the case where the running code was actually C++, further adding to the confusion.
The model punished the common case
Most JS code never touches objects (DOM or not) from another origin. Because our security checks happened for all calls, no matter who was calling (because we didn’t know), even code that we knew would never fail a security check paid the expensive cost of walking the stack and computing privileges for objects. Our ideal solution would avoid this cost for common code.
The security checks were dynamic
Once JS code is compiled, its privilege levels are built into it. Furthermore, with a few exceptions, every object in existence has its privileges baked in. The model we had recomputed both of these privilege levels on every use. We could vastly speed up our security checks by computing the relationship between code and the objects it uses at the time the objects are exposed to it (see also: object capabilities).

The solution

Around the Firefox 4 timeframe, we decided to tackle these problems head-on by moving our security checks into the JS objects themselves. This work started as part of the compartments work and continues today. It is important to note that the problems described above actually caused real security bugs that we had to fix; this wasn’t simply theory. So, as part of the compartments work, we moved our security checks into the JS layer and computed whether they would pass or fail ahead of time. This turned out to be a massive performance win.

At the same time, we started taking advantage of our security checks’ reduced reliance on stack frames. In particular, we found that simply maintaining the JS stack was costing us performance, so we wanted both to slim down our stack frames and to avoid pushing them in cases where they weren’t needed.

Back to enablePrivilege

As explained in my last post, enablePrivilege relied heavily on our use of the JS stack. With the compartments work, we no longer needed to walk the stack for our other security checks, but enablePrivilege still required it, leading to a situation where we had to re-add the ability to use the stack to our new security model (if I remember correctly, our first attempt to allow enablePrivilege to continue working accidentally disabled every single security check). Even worse, we had to continue to maintain state on the stack solely for this case. As our JITs got more complex, this burden got more expensive; the mere existence of the stack-walking code has cost us weeks of work. The problems with the JS stack piled up after we had to fix several earlier security bugs caused by there being multiple ways of expressing privileged code, so enablePrivilege was already high on our list of things to remove from the platform.

The removal of enablePrivilege, therefore, means that we will be able to speed up our JS engine and simplify our security model, while reducing the possibility of us introducing security bugs.

For next time

In the next installment, I’ll dive into some of the less technical reasons that enablePrivilege removal is good. After that, I’ll talk about how to replace enablePrivilege in web applications.

1 comment

Sep 12

Security Checks and enablePrivilege in Gecko, part 1

Category: enableprivilege

Who cares?

The imminent death of enablePrivilege has brought a few angry web application developers into Bugzilla, and they’re quite rightly demanding to know why we’re removing a very powerful tool from their toolbox. While Jonas has responded to them, I thought it might be interesting to expound on the performance aspect of enablePrivilege removal. In order to understand the impact, let’s take a walk down the history of how security checks were implemented in Gecko.

First, bindings

The history of JS security checks in Gecko is very tightly coupled with the history of the bindings that allow JS-to-C++ communication. The first DOM bindings in Gecko were auto-generated from the IDL for each element type by a program that only ran on Windows. The generated bindings would then do the proper forwarding to C++ and automatically convert the result to JS. Because there was no single point of entry going from JS to C++, the code generator had to insert security checks (more on that later) at each entry point. As a side note, looking at the old generated code, we actually forgot to security-check accesses to unknown properties (called “expandos”), which would be considered a serious security bug these days. In addition to the security check that verified that a given method call was allowed at all, each DOM method had to check that the operation was permitted on all of the arguments passed to it as well.

In 2001, jst, jband, and peterv landed a massive project called “XPCDOM.” This got rid of the megabytes worth of generated code in favor of using XPConnect’s generic JS to C++ bridge. Because XPConnect has a single point of transit between JS and C++, this allowed us to have a single security check guarding all calls from JS to C++. This single check was, as before, supplemented by additional checks in C++ to verify the validity of the arguments passed in as well as additional checks in the JS engine to catch tricky edge cases that didn’t go directly through C++, but instead stayed in JS (and the engine).

So, what are these “security checks”?

I believe that Gecko’s original security model was designed with Java 1’s security model in mind. The main idea of this model is that, at any point in time, it must be possible to inspect the currently running code to ask it what permissions it has and to assign a permission level for every object in the system. In order to perform the former operation, the JS engine exposed a “stack walking” API, which allowed us to write code that walked up the JS stack, asking each stack frame what permissions it had. For the latter operation, we had another (expensive) method of asking every object what permissions it had. Comparing the two permissions gave us our result.

What does this have to do with enablePrivilege?

Seen without this context, the semantics of enablePrivilege are odd: given a JS stack frame, a call to enablePrivilege elevates the privileges of that stack frame (and any functions it calls) and then returns it to normal privileges once that stack frame returns. Knowing how security checks were implemented, however, it makes complete sense. Because the ability to assign privileges to running code depended on the ability to walk the JS stack, annotating stack frames with the information “has called enablePrivilege” and then later (if a security check was about to fail) asking “by the way, do any stack frames have this additional privilege?” was a natural and easy implementation.
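
The stack-annotation scheme can be sketched as a toy model: each frame records whether it called enablePrivilege, and a check that is about to fail walks the frames looking for the annotation. This is an illustration of the model only; the names and structures here are invented, not Gecko code:

```javascript
// Toy model of the old stack-walking security check: each "JS stack
// frame" is an object we push and pop manually, and enablePrivilege
// just annotates the currently running frame.
const jsStack = [];

function callFunction(name, body) {
  jsStack.push({ name, privileged: false }); // enter a frame
  try { return body(); }
  finally { jsStack.pop(); }                 // privileges end on return
}

function enablePrivilege() {
  jsStack[jsStack.length - 1].privileged = true;
}

// "By the way, do any stack frames have this additional privilege?"
function checkPrivileged() {
  return jsStack.some(frame => frame.privileged);
}

const allowed = callFunction("outer", () => {
  enablePrivilege();                 // elevates `outer`...
  return callFunction("inner", () => // ...and anything it calls
    checkPrivileged());
});
console.log(allowed);          // true: a privileged frame is on the stack
console.log(checkPrivileged()); // false: the privilege ended with `outer`
```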

Up next: the problems

We’re currently in the process of changing how all of this works, so clearly the solutions presented here were found lacking. So, up next: what’s happening now, and why.

Comments are off for this post

Jun 26

Preparation and Learning French

Category: Uncategorized

Preparation

There are books written on how to learn languages and how to prepare for a trip to France, so I won’t spend a lot of time on my own approach. Having said that, though, there are a few things that are worth mentioning explicitly here. The most important thing, though, is to have fun with it. Learning a language gives you a means to speak to an entirely new world of people and a new way of understanding cultures (both how different and how similar they can be).

My experience with French started when I was two or three years old. My parents (who already had started teaching me a few French words and songs) enrolled me in a class. Unfortunately, we moved when I was still pretty young so I ended up losing a lot of what I had learned. That being said, I think that some of the grammatical constructs and accent stuck with me. Furthermore, throughout my life, my family has used French in certain circumstances. For example, when my mom asks me how much money I have on me (if we’re going into a shop), I respond in French to avoid announcing to the world at large that I have $100 on me.

Comprehension

When learning a new language, there are two separate axes that are useful to think about: comprehension and speaking. Some things, like learning new vocabulary, will improve both at the same time, but I needed to work on them separately. I found that practicing listening to French via podcasts (five to ten hours a week) helped me a lot here; basically, any French will do. I found RFI’s Journal en français facile to be very helpful, as well as a fun way to keep up with world events. I also subscribed to a couple of RTL’s podcasts. Once you achieve a basic level, watching movies and TV shows helps to teach you some of the slang (argot) as well as helping you acclimate to how you’re likely to hear French spoken on the street. The French that you hear spoken by radio personalities is very clean, very formal, and very clearly enunciated, and it will take a lot of time to get used to the pace and grammatical shortcuts of casual French. If you do watch movies, try to do so without subtitles; it is very easy to read the subtitles and not listen to the French, defeating the purpose. If you’re having too much trouble understanding a movie without subtitles, or you can’t disable them, concentrate on ignoring them as much as possible.

Speaking

There’s a secret method for learning how to speak a new language, but don’t tell anybody: practice! Throughout the two years between my second trip to Paris and my six-month voyage, I did the following:

  • Read French articles aloud (occasionally recording and replaying them to make sure my accent was somewhat correct).
  • Found French meetup groups, as much as my schedule permitted.
  • Spoke to myself a lot: in the shower, in my car.
  • Maintained an internal French dialog of what was happening in my life and what I was seeing, which is a great way to learn new vocabulary (e.g. when I was watching Harry Potter, I’d say to myself: “ok, maintenant, il utilise son … wand (sa baguette) … pour se défendre!”).

Every little bit helps. Speaking out loud is very important: even if you have the best vocabulary and perfect grammar, you need to get used to saying unusual sounds and unusual combinations of sounds. For some words (like périphérique), I would actually sit down and, for minutes at a time, say the word over and over until it rolled off my tongue.

Results

Where did all of this work get me? When I got to France, my comprehension had gotten quite good. My French friends would have to take a little care to be sure I could keep up, but not to a point where it was onerous for them. Also, once I got to France and was surrounded by French, my level improved in leaps and bounds. As for fluency, I could speak and get my point across, but with a lot of grammatical mistakes and a lot of hesitation. The cure for this was simply to speak more and to force myself to go faster (more on this later). Overall, I would say that my level had moved from low intermediate to high intermediate.

Once I got to France

It was very important to me that I speak French in France. This is actually not as obvious as it sounds: a lot of people speak English, and will take the opportunity to practice with you. Don’t let them! It is too easy to sit back and speak English, which won’t help your French at all. If someone responded in English (usually under the guise of “Oh, I speak English, it’ll be much easier for you”), I would respond, “Merci, mais je préfère parler en français” (Thank you, but I prefer to speak French). While it was true that it was easier for me to speak in English, I was willing to struggle as much as necessary. So, don’t take the easy way out!

Final note

I don’t know if I’ve made it clear enough, but the one constant here is that it will take time, effort, and patience. Don’t get discouraged! As I wasn’t taking courses, I didn’t have any tangible evidence of my progress, except that occasionally I would say a complex sentence without stuttering or mistakes, stop, and realize, “Wow, I just said that!” That realization, in a way, is a better yardstick of progress than a simple grade.

3 comments

Jun 25

6 Months in Paris

Category: Uncategorized

What happened?

I’m sitting here in Mozilla’s San Francisco office after having spent 6 months in Paris, working in our office there. While I was in France, I basically lived my life in French and spent as little time as possible speaking in English. It was an amazing time, and I’m sad to be back in California!

As an American in Paris, I noticed a bunch of interesting things about living in France both in terms of having to speak in a second language and differences about our cultures. So, I am going to spend a couple of blog posts talking about my observations and experiences. I was very lucky to have coworkers and friends in Paris who were accepting of my mistakes and willing to let me bumble my way through my trip (and patiently explain when I made mistakes, both culturally and linguistically). While my experience was in France, I think that a lot of what I learned can be applied to other countries that don’t use English as their primary language.

PAQs (Possibly Asked Questions)

Why Paris?

My family has a special relationship with France. My parents both speak French (even though they’re both American) and both lived in Paris for an extended period of time. I have cousins and uncles who go to France regularly. As a result, as I grew up, I found myself surrounded to some extent by French and French culture. For example, my parents have been trying to teach me French ever since I was three. On the strength of this familiarity with French, I’ve been to France (and more precisely to Paris) several times, each time for a month, and every time I’ve been there, I’ve found myself saying, “I would really love to live here.” So, when I realized that I might have the opportunity, I jumped at it.

Where did you stay?

Going to a foreign country for six months is a little awkward. It’s just enough time that it might not make sense to keep your apartment in your home city, but not quite long enough that it’s worth the hassle of finding a new one when you get back (especially in San Francisco). Fortunately, I have an “aunt” (in a very loose sense) who avoids winters in Paris by living elsewhere in the world for four months and who doesn’t like to have an empty apartment, so while I was in Paris, I was doing double duty working for Mozilla and housesitting (it’s a hard life, I know!).

2 comments

Sep 23

All Hands

Category: Uncategorized

Last week was the Mozilla Corporation All Hands meeting. This means the entirety of Mozilla (or, at least, those of our employees who could make it) packed into the convention center in San Jose for a week of fun activities and meetings with coworkers from around the globe. We saw neat demos of new technologies and got to apply pressure to people who owed us reviews.

Sitting in the convention center for the first keynote, I couldn’t help thinking back to the first All Hands that I attended. I think it was the second Mozilla Foundation (not yet Corporation) All Hands, the first having been held before I got my internship. I found this snippet from an e-mail about the event:

Date: June 21, 2005
Subject: All Hands Meeting Information – tomorrow at 12 Noon, 650 Castro Street


At 1:00 PM we will start the meeting. Right now there is no formal agenda, but Mitchell will talk and there will be some updates from various members of the organization.

Note that 650 Castro Street didn’t mean the 3rd floor of the building as it does now. Instead, some company had rented us a single room on the ground floor that was being used to hold board meetings. The 25 or so of us sat around in the room to listen to people such as Mitchell talk about the direction of Firefox and the Foundation. Beyond that, I don’t really remember what was discussed in too much detail, but it was a fun time. After the meeting, we all went go karting in Menlo Park (a race that Josh won) before returning to the office the next day to get back to work.

It’s funny to me that the logistics e-mail I found was sent out a day before the event and there was no real set schedule other than “We’re going to talk for a bit.” In fact, there weren’t even enough of us to need buses; instead, we simply piled into various coworkers’ cars.

The All Hands meeting last week saw nearly 600 people come in from all around the world. The first planning e-mails went out months in advance and we filled an entire hotel for a full week. Furthermore, the entire first day was filled with keynotes and speakers had a fixed amount of time in which to deliver their messages. Every evening had an activity associated with it, including shuttles to transport everybody to and from the events.

What a change it’s been. I can’t wait to see what next year’s events will hold.

Comments are off for this post

Mar 23

How I got started at Mozilla

Category: Uncategorized

Over in the newsgroups, there’s a raging discussion taking place about how difficult it is for new contributors to contribute to Mozilla. As part of this sort of discussion, people tend to post “my first patch” stories. Rather than reply there, I figured I’d take this chance to update my blog 🙂

When I was in high school, I remember hearing about the launch of Netscape 6 and how it was “open source.” Being a budding computer programmer, this was an extremely exciting development. For the first time, I was going to get to see how the “real programmers” did things. So, I downloaded the source code and started looking through it. I’ve always been a “look first, jump later” type person, so even once I’d downloaded the source code and found bugzilla, I lurked. I think I lurked for close to half a year, following bugs that I was interested in by keeping them open and reloading once every few days. Occasionally, if I came across a bug I thought I could fix, I’d write a patch locally, and wait for the assignee to fix it himself and compare our approaches.

The summer after I’d graduated from high school, I was looking through the source code and came across the “htmlparser” top level directory. Now, this was something I could get my brain around. Somehow, I linked that directory to the HTML: Parser component in bugzilla (probably due to a code comment) and started looking through bugs, commenting in a couple of them when I had something useful to say. After a little bit, I found bug 154120, a small bug in “view source.” After having read through the HTML tokenizer code for a while, I’d seen a few things related to “view source”, so I figured out where the bug was and fixed it! I’d been watching other Mozilla developers working in bugzilla and had observed some of the magic incantations (“diff -u” and attaching the result to the bug), but as I was entirely unfamiliar with the process as a whole, I didn’t realize the importance of requesting “review” (as I recall, my hope was that the current assignee of the bug would see the patch and do something useful).

And, nothing happened. I had CC’d myself to the bug and attached the patch, but had no idea of how to advance my patch further. So, I drifted away and went to college for a year and waited until the next summer, entirely forgetting about the patch I had attached. You can imagine my surprise, then, when out of nowhere, I got several e-mails from bugzilla telling me that some guy named Boris Zbarsky had not only seen my patch, but updated it to the current trunk and found it “exactly right.” I still remember the surge of adrenaline kicking in on receiving the e-mail for comment 19 in that bug: “My code is in Mozilla!”

What else could I do? I was instantly hooked on that feeling. Having had success with one view-source bug, I found another one and commented in it. Fortunately, Boris was already CC’d on that one so he could respond and away I went: another (small) bug quickly dispatched, another rush of adrenaline. With Boris to pester on IRC when I had questions and to review my patches (I can only imagine how much I tried his patience, especially in those early days) I was off and running to becoming a developer on Mozilla.

4 comments

Feb 11

xpcnativewrappers=no going away

Category: Uncategorized

Way back in 2005, jst, brendan, and bz combined to implement XPCNativeWrappers (or, as I’ll refer to them, XPCNWs). XPCNWs have the somewhat bizarre behavior that they incompatibly change the view of an object from an extension’s perspective. For example, an extension that grabs a content window’s “window” object and tries to call a function on it that is not part of the DOM would work before XPCNWs, but not after.

Not having concrete data on how many extensions would be affected by such a change, and erring on the side of caution, we implemented a way to opt out of XPCNWs. Basically, if your extension broke because of XPCNWs, you could ask Gecko to give you the old, insecure behavior. The intent at the time was to let authors flip the switch off, then go back to their extensions and fix things until they could turn on XPCNW support.

Now, in order to support a more secure and easier to use platform, it is necessary to remove support for xpcnativewrappers=no. This will mean some work on extension authors’ parts:

  • If your extension relies on xpcnativewrappers=no, your extension will stop working correctly when bug 523994 lands.
  • In order to fix it, you should identify the parts of your extension that require direct access to content objects. This should be limited to a few cases:
    1. If your extension depends on XBL bindings attached to content objects (namely, being able to call functions or get and set properties created by the XBL binding), then you will need to use the .wrappedJSObject of the XPCNW.
    2. If you need to call functions or access properties defined by the content page (for example, if you wrote an extension to add a delete button to Gmail and there’s a window.delete() function defined somewhere).
    3. See the devmo page on XPCNativeWrappers for more.
  • Note that if all you do with content objects is use DOM methods, then everything should simply continue to work (and you shouldn’t be using xpcnativewrappers=no anyway)!
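
The relationship between a wrapper and its .wrappedJSObject can be illustrated with a plain Proxy: the wrapper exposes only a safe set of properties, while .wrappedJSObject deliberately punches through. This is a loose analogy with invented names (makeNativeWrapper, the whitelist approach), not Gecko’s actual implementation:

```javascript
// Toy "native wrapper": only whitelisted (DOM-like) properties of the
// underlying content object are visible through the wrapper itself.
function makeNativeWrapper(contentObject, safeProperties) {
  return new Proxy(contentObject, {
    get(target, name) {
      if (name === "wrappedJSObject") return target; // explicit escape hatch
      if (safeProperties.includes(name)) return target[name];
      return undefined; // page-defined expandos are hidden by default
    },
  });
}

// A content-page object with one "DOM" method and one page-defined one.
const contentWindow = {
  getElementById: id => `element:${id}`, // stands in for a DOM method
  delete: () => "message deleted",       // page-defined, like window.delete()
};

const wrapper = makeNativeWrapper(contentWindow, ["getElementById"]);

console.log(wrapper.getElementById("inbox"));  // DOM access works through the wrapper
console.log(wrapper.delete);                   // undefined: hidden by the wrapper
console.log(wrapper.wrappedJSObject.delete()); // "message deleted": deliberate opt-out
```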

I’ll write a second post soon to describe the what and why of XPCNWs and .wrappedJSObject.

4 comments

Jun 30

Working on the JS engine

Category: Uncategorized

Especially when working on old branches, without some of the nice debugging helpers that jorendorff has implemented, I sometimes look at my gdb session and just know that I’m working on the JS engine:

(gdb) p $.atom
$11 = (JSAtom *) 0xb194f984
(gdb) p/x *(JSString *)((int)$ & ~7)
$12 = {length = 0x20000004, u = {chars = 0xaf434970, base = 0xaf434970}}
(gdb) x/4ch $.u.chars
0xaf434970:	97 'a'	98 'b'	99 'c'	100 'd'
1 comment
