Mercurial: a script to import a patch queue from another repository

Thursday, September 19th, 2013

Here is a script that makes it less tedious to land patches, e.g. by importing one’s mozilla-central patch queue into a mozilla-inbound clone.

Example usage:

  $ cd /hack/mozilla-central
  $ hg qpush patch1
  $ hg qpush patch2
  $ hg qpush patch3
  $ cd ../mozilla-inbound
  $ ./ ../mozilla-central
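
The script itself isn’t reproduced above, but its job can be sketched in a few lines of shell. Everything below is my assumption about how such a script could work (the file name `import-patch-queue.sh`, the use of `hg qapplied` and `hg import`), not the actual script:

```shell
# Write a hypothetical version of such a script to disk. It takes the path
# to a source repository as its argument, lists the mq patches currently
# applied there, and imports each one into the current repository.
cat > import-patch-queue.sh <<'EOF'
#!/bin/sh
SRC="$1"
# hg -R runs the command against the source repository; qapplied prints the
# applied patches in order, and each patch file lives under .hg/patches.
for p in $(hg -R "$SRC" qapplied); do
    hg import "$SRC/.hg/patches/$p" || exit 1
done
EOF
chmod +x import-patch-queue.sh
```

Stopping on the first failed import (`|| exit 1`) matters here: patches later in the queue usually depend on the earlier ones, so continuing past a failure would only produce rejects.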

A customized “planet Mozilla” with a focus on Gecko development

Monday, December 10th, 2012

Planet Mozilla doesn’t do what I personally need. Planet Mozilla Projects gets closer but still misses the mark: far more stuff than I can read, and it omits some very useful one-person blogs.

So here’s a Google Reader bundle that does what I want. It currently aggregates 40 Mozilla-related feeds, some of them from Planet Mozilla, some others from Planet Mozilla Projects. It’s unabashedly biased toward my personal interests. You might find it useful too. Suggestions for additional feeds are welcome.

Here’s a rough description of what to expect there:

  • Over 50% Gecko-related posts.
  • Some hacking tips.
  • Some other Mozilla technical things.
  • Some other Mozilla non-technical things to stay connected with the wider community, like the excellent Bonjour Mozilla.
  • As little personal/ego blogging as possible, though that’s hard to guarantee, as some otherwise very interesting feeds occasionally digress a bit. It’s a tough call: I have to refrain from adding some very useful feeds to avoid diluting the blogroll too much with off-topic posts.
  • No political/philosophical/religious blogging at all. I promise I’ll unconditionally remove any feed I notice doing that. (Oops: now I expect that the Planet Mozilla audience will flame me, pointing out that this post of mine is precisely doing politics. Well, yes, but not that kind of politics.)


B2G debugging with GDB really works. No excuses for printf debugging!

Friday, November 9th, 2012

Apparently many people still believe that GDB debugging on B2G doesn’t really work. I know that feeling: I held that wrong opinion myself until yesterday.

Turns out that my issues were all caused by simple mistakes.

Of all the standard features of a debugger that I have tried, only watchpoints don’t currently work. Everything else works, including setting a breakpoint on a symbol and getting it hit.

So I expanded the MDN page on this topic a little, adding a summary of what works and some troubleshooting information.

If you were having issues with debugging B2G, check whether that helps you. If not, please improve the page!


Extracting useful data from crash reports

Thursday, August 2nd, 2012

For a while, I’ve been extracting data about Firefox’s graphics features from crash reports. I’ve recently expanded and updated the results, which you can see here:

In particular, besides what users of this page already know, new questions answered include:

Extracting this kind of data from crash reports is very easy. Here’s a nano-tutorial.

The public crash report data is available there:

You want the -pub-crashdata files. There is one per day. Download one of them, for example:

I suggest keeping this file compressed on disk and only decompressing on-the-fly, as shown below.

Each line in this file represents one crash report. For example, to count the crash reports from 20120801:

$ zcat 20120801-pub-crashdata.csv.gz | wc -l

To count how many of them came from Windows XP SP2 users:

$ zcat 20120801-pub-crashdata.csv.gz | grep Windows.NT.5.1 | grep Service.Pack.2 | wc -l

Note that I’m using the dot to match any character, including actual dots as in “5.1”, since it makes no difference here and I’m too lazy to properly escape them.
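
To make the two pipelines above reproducible without downloading anything, here is a self-contained demo against a tiny fabricated sample; the field layout and values are made up (real -pub-crashdata files have many more fields), but the commands are the same:

```shell
# Build a three-line fake crash-data file, gzipped like the real ones.
printf '%s\n' \
  'sig1,Windows NT 5.1 Service Pack 2' \
  'sig2,Windows NT 6.1' \
  'sig3,Windows NT 5.1 Service Pack 2' \
  | gzip > sample-crashdata.csv.gz

# Total number of crash reports (one per line):
zcat sample-crashdata.csv.gz | wc -l

# How many of them came from Windows XP SP2:
zcat sample-crashdata.csv.gz | grep Windows.NT.5.1 | grep Service.Pack.2 | wc -l
```

On this sample the first pipeline prints 3 and the second prints 2.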

Just look at the first few lines of a -pub-crashdata file to see what data is in there. In particular, you get the crash report’s AppNotes, where Gecko code can write custom annotations: that is how we know how many users have WebGL or Layers Acceleration working, for example. You also get the crash signature, so you could plot how crashy a symbol has been over time. You also get CPU info, while GPU info is typically found in the AppNotes. And you get the HTTP link to the full crash report, which is easy to extract with the cut command, so you could build tools that give you right away the crash links relevant to your interests.
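
As a hypothetical illustration of the cut trick: the real column holding the crash-report link must be read off a -pub-crashdata file’s header line; the field position and URLs below are made up for the demo.

```shell
# Fabricate a two-line sample where the link happens to be the first field.
printf '%s\n' \
  'https://crash-stats.example/report/1,Windows NT 6.1' \
  'https://crash-stats.example/report/2,Windows NT 5.1' \
  | gzip > links-demo.csv.gz

# Print only the first comma-separated field of each line:
zcat links-demo.csv.gz | cut -d, -f1
```

Combine that with a grep on a signature of interest and you get, right away, the list of crash links relevant to you.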

The data in my graphs was extracted by a C++ program driven by a Bash script.


E-voting in French election requires out-of-date Java plugin, blocked by Firefox

Thursday, May 24th, 2012

France is trying e-voting for the first time in the upcoming legislative elections, for French voters residing outside of the country (one million voters), and it’s defective to an amazing extent — even by e-voting standards.

It requires the Java plug-in. Not only that, but it doesn’t even work with the latest version 1.7 of it, and requires the outdated version 1.6, which of course is blocked by Firefox for security reasons.

As a result, the French government (still same link) is going as far as asking voters to use another browser!

Only the Oracle version of Java is supported. OpenJDK is explicitly unsupported.

Update: It seems that Firefox doesn’t block the newest revisions of Java 1.6 (only 1.6.30 and below are blocked). Assuming that’s correct, the French government’s message asking users to switch to a different browser is unfounded.

WebGL 1.0.1 conformance testing, part 2

Saturday, April 21st, 2012

Last week, I asked people to participate in WebGL 1.0.1 conformance testing. The response has been amazing and I want to thank everyone for that.

The results can be seen in the archives of the dedicated Google Group. Thanks to these, we have identified and fixed a couple of bugs in Gecko, we have landed a couple of work-arounds for widespread driver issues, and we have even found and corrected an issue in a WebGL conformance test that was non-passable on color-managed systems!

So now is a great time to start a second round of mass testing.

Please follow the instructions on this page!

Firefox users: please use a Nightly build from today (2012-04-21) or newer. Results should be very good. Also, at this point, any test failure is very likely to be a driver bug worth reporting to your driver vendor.

While I’m speaking only with my Mozilla hat on here, we are interested in results from all WebGL-capable browsers on desktop platforms: Chrome, Firefox, Opera and Safari, on Windows, Mac OS X and desktop Linux.

Your help needed: run WebGL 1.0.1 tests in today’s Nightly build

Tuesday, April 17th, 2012

In the WebGL WG, we’re currently asking people to run WebGL 1.0.1 conformance tests in Nightly builds of their favorite browsers, using recent graphics drivers, to see the actual status of passing conformance tests on real drivers.

As far as Firefox is concerned, some important fixes just arrived in today’s (20120417) Nightly builds, so if you have recent graphics drivers, I would very much like you to get this build or upgrade to it, and follow these instructions.

We’re interested in results on the 3 main desktop operating systems: Windows, Mac OS X and Linux. And of course, it is also very interesting if you can test other browsers, such as Chrome 20 or Canary, Opera 12, or Safari WebKit Nightly builds.

Many thanks!

Update: Huge thanks to everyone who contributed. I have enough data for right now, but as fixes and workarounds keep landing, it’s always useful to have more people testing in the future. Other browser vendors may also be interested in more testing. I will probably ask for more Firefox testing in a few days once some more workarounds have landed.

Update 2: Time for a second round of testing! See my new post.

Introduction to WebGL (FITC talk slides)

Monday, March 26th, 2012

Here are the slides (rather, a plain HTML page) of my “Introduction to WebGL” talk at FITC Spotlight Javascript, just in case that might be useful. After a quick general introduction, I focused on making sure that people at least grasp the basic concepts, especially around shaders. The idea is that once they understand that, they can easily learn the rest by themselves using existing excellent tutorials.

By the way, some notes on “slides”:

  • I don’t understand why people stick to the “slide projector” format. I find that continuously scrolling through a plain HTML page is much better, for many reasons. Most importantly, it lets me zoom in and out as needed (to adjust to the realities of conference rooms), which is not practical once the document is formatted into “slides”. It also lets me include longer code snippets or diagrams without worrying about slide boundaries, in situations where having to scroll a bit is acceptable.
  • Another thing I really don’t understand is why most slide templates use dark grey text on light grey background. When using a video projector, maximizing contrast should be a higher priority than being cool with colors.
  • Finally, I don’t understand the value of adding pauses to slides (i.e. only showing part of a slide at first, then revealing the rest). It probably keeps people entertained, but I don’t see how it helps people understand things better.

Blogs are the worst medium for a debate

Thursday, March 8th, 2012

Why is it that when people debate using blogs, this almost inevitably degenerates and causes negative feelings?

Here’s an attempt at a theory to explain that. When X and Y are debating, it should be X talking to Y and Y talking to X. Trivial, no?

But blogs break this trivial requirement. When X blogs about what Y wrote, it’s not X talking to Y. Instead, it’s X talking to The World about Y. The result is twofold:

  1. It makes Y feel publicly attacked.
  2. It invites The World to the debate, feeding it with fresh people who are not yet tired of it, and who may have missed earlier parts of it, since it’s not easy to trace a debate-by-blogs back to its origin.


Some aspects of security that have nothing to do with “sandboxing” and “process separation”

Wednesday, January 18th, 2012

I really don’t know much about security, at all. It’s a big field touching almost every aspect of computing, and I only occasionally get some exposure to it, as part of my WebGL work.

But recently, I’ve come across some browser security articles (like this and this) that paint a picture of browser security that can’t even accommodate the few examples I’ve personally had to deal with. Indeed, they tend to reduce browser security to just a couple of aspects:

  • Arbitrary code execution
  • Information leakage across browser tabs

So they proceed to judge browser security based only on a few features revolving around these two aspects, chief among which are sandboxing and process separation.

These aspects of security sure are very important and interesting, but do they really deserve to be glorified as the be-all and end-all of security?

In my limited experience with WebGL, these aspects have indeed sometimes shown up in certain bugs we’ve fixed, like certain crashes involving heap corruption. We took them very seriously and rated them ‘critical’ because, theoretically, they are the kind of bugs that can lead to arbitrary code execution. In practice, however, we haven’t, as far as I know, seen any of them actually exploited, and for good reason: a majority of them are probably not practically exploitable, especially since techniques such as ASLR and DEP are in use. More importantly, these bugs have been easy to fix, so they just got fixed before they could be widely exploited.

So what I want to talk about here is other categories of bugs I’ve encountered around WebGL, that were not as easy to fix.

Example 1: cross-domain information leakage

There was a flaw in the 1.0.0 version of the WebGL spec, which Firefox 4 followed, that led to a cross-domain information leakage vulnerability. Details are given on that page; let’s just say here that it allowed a malicious script from one domain to read back images from other domains, which is a serious concern. That vulnerability was fixed in Firefox 5, but the fix was heart-breaking, as it involved disallowing the use of cross-domain images in WebGL, which broke some legitimate Web pages. A way forward has since been implemented.

There are plenty of examples of cross-domain information leakage vulnerabilities; they are a key part of the Web landscape as they often shape the boundaries of what’s doable and what isn’t (read this). For example, they are the reason why we can’t allow regular Web pages to render other Web pages inside of WebGL scenes, and beyond WebGL, they are now a key technical challenge for CSS Shaders. In addition to shaping new Web specifications, they also make some optimizations unsafe to use in, say, Canvas 2D implementations.

Perhaps it’s worth underlining the fact that information leakage across domains has little to do with information leakage across tabs, which is why process separation is mostly irrelevant here. The above-mentioned cross-domain leakage vulnerability required only one browser tab to exploit. Indeed, the test case had only one canvas; even if some exploit ever used two canvases from two different domains, they could still be put in iframes in a single Web page.

Example 2: browser or driver bugs exposing video memory

We’ve seen (and fixed!) a few bugs whereby it was possible, through WebGL, to get read access to random parts of video memory.

Sometimes it was our fault (like this one): we weren’t correctly programming the graphics system to clear new surfaces, so they still contained contents from earlier usage of that memory area.

Sometimes it was the driver’s fault (like this one and this one): despite our correctly programming the graphics system to clear our video memory, the driver got it a bit wrong and you could end up with your Terminal window painted inside a 3D scene. Regardless, it is the browser’s duty to ensure that such bugs don’t affect the user as a result of browsing. That latter bug is why we blacklisted Mac OS 10.5 for WebGL, but the other one affects newer OSes, so I encourage all users to ensure that they are on the latest stable version of their favorite browser, which has a work-around ;-)

Example 3: client denial-of-service

Denial-of-service vulnerabilities are a very big deal for servers, because for ill-intentioned people, there can be profit in taking down a server in this way. In the case of clients (like Web browsers), the profitability of a denial-of-service (DoS) attack is much more limited, or even nonexistent in many cases. We don’t see a lot of Web pages trying to DoS your browser, because all they would gain from it is… that you wouldn’t visit them again.

The existence of DoS vulnerabilities in the Web platform has been a reality forever, and there aren’t great solutions to avoid them. For example, a script can allocate a lot of memory, denying other programs on your computer the “service” of having that memory available to them; and if the browser decided to limit how much memory a script can use, that would certainly collide with legitimate use cases, and there would still be plenty of other DoS vulnerabilities not involving scripts at all. Fun experiment: on a browser that does hardware-accelerated rendering, which will soon be all browsers, try to saturate video memory with a Web page containing lots of large image elements.

WebGL, like every other 3D API since OpenGL 1.1 was released in 1997 with the “vertex arrays” feature, has a specific DoS vulnerability: it allows a script to “hog” the GPU, which is particularly annoying as today’s GPUs are not preemptible. Modern OSes have mechanisms to reset the graphics driver when it’s been frozen for a couple of seconds, but many drivers still respond poorly to that (they crash). It’s sad, but we haven’t seen this hurting many users in the real world, and at least it has led to good conversations with GPU vendors; as a result, things are improving, albeit slowly.


Those are the three worst kinds of WebGL-related vulnerabilities that I’ve personally had to deal with. The security techniques that some people think are the Alpha and the Omega of browser security are irrelevant to them. I don’t mean that these techniques (sandboxing, process separation…) are useless in general: they are extremely useful in general, but they are useless for the particular kinds of security bugs that have been scariest in my own limited experience. This means that browser security does not boil down to just these techniques, as the security articles I linked to at the beginning of this post would have you believe.