Some aspects of security that have nothing to do with “sandboxing” and “process separation”

January 18th, 2012 by bjacob

I really don’t know much about security, at all. It’s a big field touching almost every aspect of computing, and I only occasionally get some exposure to it, as part of my WebGL work.

But recently, I’ve come across some browser security articles (like this and this) that paint a picture of browser security that can’t even accommodate the few examples I’ve personally had to deal with. Indeed, they tend to reduce browser security to just a couple of aspects:

  • Arbitrary code execution
  • Information leakage across browser tabs

So they proceed to judge browser security based only on a few features revolving around these two aspects, chief among which are sandboxing and process separation.

These aspects of security sure are very important and interesting, but do they really deserve to be glorified as the be-all and end-all of security?

In my limited experience with WebGL, these aspects have indeed sometimes shown up in certain bugs we’ve fixed, like certain crashes involving heap corruption. We took them very seriously and rated them as ‘critical’ because, theoretically, they are the kind of bugs that can lead to arbitrary code execution. In practice, however, we haven’t, as far as I know, seen any of them actually exploited, and for good reason: a majority of them are probably not practically exploitable, especially since mitigations such as ASLR and DEP are in place. More importantly, these bugs have been easy to fix, so they just got fixed before they could get widely exploited.

So what I want to talk about here is other categories of bugs I’ve encountered around WebGL, that were not as easy to fix.

Example 1: cross-domain information leakage

There was a flaw in version 1.0.0 of the WebGL spec, which Firefox 4 followed, that led to a cross-domain information leakage vulnerability. Details are given on that page; let’s just say here that it allowed a malicious script from one domain to read back images from other domains, which is a serious concern. The vulnerability was fixed in Firefox 5, but the fix was heart-breaking: it involved disallowing the use of cross-domain images in WebGL, which broke some legitimate Web pages. A way forward has since been implemented.

There are plenty of examples of cross-domain information leakage vulnerabilities; they are a key part of the Web landscape as they often shape the boundaries of what’s doable and what isn’t (read this). For example, they are the reason why we can’t allow regular Web pages to render other Web pages inside of WebGL scenes, and beyond WebGL, they are now a key technical challenge for CSS Shaders. In addition to shaping new Web specifications, they also make some optimizations unsafe to use in, say, Canvas 2D implementations.

Perhaps it’s worth underlining the fact that information leakage across domains has little to do with information leakage across tabs, which is why process separation is mostly irrelevant here. The above-mentioned cross-domain leakage vulnerability required only one browser tab to exploit. Indeed, the test case had only one canvas; even if some exploit ever used two canvases from two different domains, they could still be put in iframes in a single Web page.
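To make the idea concrete, here is a toy Python simulation of how pixel values can leak without ever being returned directly, through a timing-style side channel. Everything in it is hypothetical (the function names, the cost model); it sketches the bug class, not the actual WebGL exploit.

```python
# A toy simulation of a timing-style side channel: the "renderer" never
# returns pixel values, but the amount of work it does (a stand-in for
# elapsed GPU time) depends on them, so an attacker who can measure
# per-pixel cost recovers the pixels anyway. All names are hypothetical.

def render_cost(pixel):
    # Work units consumed on one pixel. The value-dependent loop is
    # exactly the property a secure implementation must avoid.
    work = 0
    for _ in range(pixel):
        work += 1
    return work

def attacker_recover(secret_image):
    # The attacker observes only the cost, never the pixel values.
    return [render_cost(p) for p in secret_image]

secret = [13, 200, 0, 77]        # cross-domain pixels, not directly readable
print(attacker_recover(secret))  # -> [13, 200, 0, 77]
```

The blunt defense is to keep untrusted code from doing value-dependent work on data it isn’t allowed to read, which is what disallowing cross-domain images achieved.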

Example 2: browser or driver bugs exposing video memory

We’ve seen (and fixed!) a few bugs whereby it was possible, through WebGL, to get read access to random parts of video memory.

Sometimes it was our fault (like this one): we weren’t correctly programming the graphics system to clear new surfaces, so they still contained contents from earlier usage of that memory area.

Sometimes it was the driver’s fault (like this one and this one): despite us correctly programming the graphics system to clear our video memory, the driver got it a bit wrong and you ended up with your Terminal window painted inside a 3D scene. Regardless, it is the browser’s duty to ensure that such bugs don’t affect the user as a result of browsing. That latter bug is why we blacklisted Mac OS 10.5 for WebGL, but the other one affects newer OSes, so I encourage all users to ensure that they are on the latest stable version of their favorite browser, which has a work-around ;-)
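This bug class is easy to model: an allocator that recycles memory without clearing it hands the new owner whatever the previous owner left behind. A minimal Python sketch, with a byte pool standing in for video memory (all names are made up; this is not browser or driver code):

```python
# A toy allocator standing in for video memory. It models the bug class
# only: recycled memory that isn't cleared still holds the previous
# owner's bytes. None of this is actual browser or driver code.

class VideoMemory:
    def __init__(self, size):
        self.pool = bytearray(size)

    def alloc_surface(self, size, clear):
        if clear:
            return bytearray(size)   # the fix: hand out zeroed memory
        return self.pool[:size]      # the bug: recycle without clearing

vram = VideoMemory(8)
vram.pool[:8] = b"terminal"          # leftover pixels from another window

print(bytes(vram.alloc_surface(8, clear=False)))  # b'terminal' leaks
print(bytes(vram.alloc_surface(8, clear=True)))   # all zeros, as it should be
```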

Example 3: client denial-of-service

Denial-of-service vulnerabilities are a very big deal for servers, because for ill-intentioned people, there can be profit in taking down a server in this way. In the case of clients (like Web browsers), the profitability of a denial-of-service (DoS) attack is much more limited, or even nonexistent in many cases. We don’t see a lot of Web pages trying to DoS your browser, because all they would gain from it is… that you wouldn’t visit them again.

The existence of DoS vulnerabilities in the Web platform has been a reality forever, and there aren’t great solutions to avoid that. For example, a script can allocate a lot of memory, denying other programs on your computer the “service” of having that memory available to them; and if the browser decided to limit how much memory a script can use, that would certainly collide with legitimate use cases, and there would still be plenty of other DoS vulnerabilities not involving scripts at all. Fun experiment: on a browser that does hardware-accelerated rendering, which will soon be all browsers, try to saturate video memory with a Web page containing lots of large image elements.

WebGL, like every other 3D API since OpenGL 1.1 was released in 1997 with the “vertex arrays” feature, has a specific DoS vulnerability: it allows scripts to “hog” the GPU, which is particularly annoying as today’s GPUs are not preemptible. Modern OSes have mechanisms to reset the graphics driver when it’s been frozen for a couple of seconds, but many drivers still respond poorly to that (they crash). It’s sad, but we haven’t seen this hurt many users in the real world, and at least it has led to good conversations with GPU vendors; as a result, things are improving, albeit slowly.
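The recovery mechanism mentioned above is essentially a watchdog: if the GPU hasn’t finished within a couple of seconds, the driver is reset. Here is a toy Python model of that policy, using declared durations instead of real GPU time; the job names and the timeout value are made up for illustration:

```python
# A toy model of a GPU watchdog: jobs declare how long they would run,
# and any job exceeding the timeout triggers a "driver reset" instead of
# completing. Names, jobs, and the timeout are made up for illustration.

TIMEOUT_SECONDS = 2.0

def run_with_watchdog(jobs):
    # jobs: list of (name, duration_in_seconds) pairs
    log = []
    for name, duration in jobs:
        if duration > TIMEOUT_SECONDS:
            log.append(f"{name}: hung for {duration}s, driver reset")
        else:
            log.append(f"{name}: completed in {duration}s")
    return log

jobs = [("compositor", 0.01), ("hostile-shader", 30.0), ("webgl-demo", 0.5)]
for line in run_with_watchdog(jobs):
    print(line)
```

A real watchdog lives in the OS and driver and measures actual GPU progress; the fragility described above is that many drivers crash at the reset step instead of recovering.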


Those are the three worst kinds of WebGL-related vulnerabilities that I’ve personally had to deal with. The security techniques that some people think are the Alpha and the Omega of browser security are irrelevant to them. I don’t mean that these techniques (sandboxing, process separation…) are useless in general; they are extremely useful in general, just useless against the particular kinds of security bugs that have been scariest in my own limited experience. This means that browser security does not boil down to just these techniques, as the security articles I linked to at the beginning of this post would have you believe.

Do users actually get hardware acceleration?

March 28th, 2011 by bjacob

In order to get hardware acceleration, users need to have recent graphics drivers in addition to good hardware. Have they?

To find out, I used a script that parses Firefox 4 crash reports, which now report on attempted/successful/failed graphics features. Below are some findings, based on crash reports collected since the Firefox 4 release. The implicit approximation made here is that crashiness is overall independent of these graphics features, so that numbers of crashes are proportional to numbers of users. This should be a decent approximation, since crash stats show that these graphics features are not among the top causes of crashes.
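Under that independence assumption, each estimate is just a ratio of crash-report counts. A sketch in Python, with made-up counts (the real numbers come from the crash-report parsing script):

```python
# Estimating a feature's success rate from crash-report counts, under
# the assumption that crashiness is independent of these graphics
# features (so crash counts are proportional to user counts). The
# counts below are made up for illustration, not the real data.

def success_rate(successful, attempted):
    return successful / attempted

reports = {"attempted": 125_000, "successful": 10_000}  # hypothetical
rate = success_rate(reports["successful"], reports["attempted"])
print(f"{rate:.0%}")  # -> 8%
```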

Layers (compositing, etc.) acceleration

Layers acceleration is what handles compositing, image resizing/rotation, and video colorspace conversions.

Here are success rates per operating system and per graphics system used. Notice that Windows Vista and Windows 7 try both Direct3D 10 and Direct3D 9, so the total success rate there is not just the sum of the two.

OS                            Direct3D 9   Direct3D 10   OpenGL   Total
Windows XP                         8 %          -           -       8 %
Windows 2003 (NT 5.2)             21 %          -           -      21 %
Windows Vista                     10 %          7 %         -      17 %
Windows 7                          9 %         38 %         -      44 %
Mac OS X 10.6.3+                    -           -          99 %    99 %
Linux (disabled by default)         -           -          79 %    79 %
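Since Vista and Windows 7 try Direct3D 10 and fall back to Direct3D 9, the per-backend rates can overlap, and the total is the fraction of users for whom at least one backend worked, not the plain sum. A Python sketch over hypothetical per-user records:

```python
# Per-user combination of backend results: a user is accelerated if at
# least one backend worked, so with overlapping backends the total can
# be less than the sum of the per-backend rates. Records are made up.

def rates(users):
    n = len(users)
    d3d10 = sum(u["d3d10"] for u in users) / n
    d3d9  = sum(u["d3d9"] for u in users) / n
    total = sum(u["d3d10"] or u["d3d9"] for u in users) / n
    return d3d10, d3d9, total

users = [
    {"d3d10": True,  "d3d9": True},   # succeeds either way: counted once
    {"d3d10": False, "d3d9": True},   # saved by the Direct3D 9 fallback
    {"d3d10": True,  "d3d9": False},
    {"d3d10": False, "d3d9": False},  # no acceleration at all
]
print(rates(users))  # -> (0.5, 0.5, 0.75): total < 0.5 + 0.5
```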

Our decision to support Direct3D 9, instead of only supporting Direct3D 10, is paying off: it greatly increases the number of users who get hardware acceleration. Even on Direct3D 10-capable Windows versions, having Direct3D 9 as a fallback gives a nice boost to our stats. But the really important part is Windows XP. Those 8 % of XP users might seem few, until one realizes that XP users account for half of the world’s Internet users: 8 % of all XP users would be more than 50 million people. While most (92 %) XP users have outdated graphics drivers and/or hardware, those 8 % include many people who are tech-conscious enough to have recent graphics drivers installed, and it is safe to bet that many of them also care about getting hardware acceleration in their browser.

Also, we’ll see below that 21 % of Windows XP users actually get WebGL, which means they have recent enough drivers. So it seems that the reason why only 8 % get accelerated layers is lack of support for features we require there, such as 4Kx4K textures. Then again, we must be very conservative with layers acceleration, as enabling it on graphics systems that are too slow can ruin the user experience.

The 99 % figure on Mac is to be taken with a grain of salt: it is only among users of Mac OS 10.6.3 and newer, as most hardware acceleration is disabled on older versions. The 79 % figure on Linux, too, is a bit tricky: hardware-accelerated layers on Linux are disabled by default and require manual enabling. Moreover, on Linux they are correlated with Flash crashes, so this number might be completely biased.


Let’s now turn our attention to WebGL.

OS              WebGL success rate
Windows XP            21 %
Windows 2003          38 %
Windows Vista         24 %
Windows 7             58 %
Mac                   80 %
Linux                 40 %

The 21 % WebGL success rate on Windows XP is, surprisingly, much higher than the 8 % Direct3D 9 layers success rate. I suppose that the extra requirements we currently place on hardware to enable Direct3D 9 layers, such as the 4Kx4K textures requirement, are ruling out a lot of users.

The 80 % success rate on Mac really is among all Mac users. WebGL is only enabled on Mac OS 10.6, so what this tells us is that 80 % of Firefox 4 Mac users are on 10.6 and have good hardware.

The 40% success rate on Linux is going to get much higher soon, when we whitelist more recent drivers.

Finally, the Direct2D and DirectWrite success rates are basically the same as the above Direct3D 10 success rates.

Upgrade your graphics drivers!

March 4th, 2011 by bjacob

Firefox 4 brings many new features in the Graphics department, notably hardware acceleration and WebGL. However, when we turned these features on by default in nightly builds around September last year, and then in Beta 7, crash statistics and bug reports quickly showed that bugs in graphics drivers were often making these features misbehave. We reacted by selectively disabling these new features on buggy drivers, based on the large amounts of information collected by beta testers. Of course, Firefox remains fully functional: only these new features get disabled.

The resulting driver blocklist is worth reading if you want to get the most out of Firefox 4.

Unfortunately, certain computer manufacturers do not allow end users to upgrade drivers on their own. Hopefully these manufacturers will eventually give their users these much needed graphics driver updates.

Some Mercurial Queues tips: hg qcrecord, qfold, and hg qpush --move

March 2nd, 2011 by bjacob

Recently I’ve come across some good solutions for common tasks with Mercurial Queues (MQ). I’ve updated our MDC page on this subject, and for greater visibility I’d like to copy that stuff here.

Splitting a patch, the easy case: per-file splitting

If you have a patch that modifies file1 and file2, and you want to split it into two patches each modifying only one file, do:

$ hg qgoto my-patch
$ hg qref -X path/to/first/file            # take changes out of current patch and back into `hg diff`
$ hg qnew -f patch-modifying-first-file    # and take that into a new MQ patch

Here, the qref -X command takes the changes to the first file out of the patch, so that they now show up in hg diff and therefore get picked up by the hg qnew -f.

Splitting a patch, the general case, including per-hunk and per-line splitting

If you need to perform finer patch splitting, for example per hunk or even per line, there’s a great tool for that: hg qcrecord, provided by the Crecord extension. Follow the instructions on that page to install it. This extension works on your current `hg diff`, so if you have your patch as an MQ patch, you first need to take the changes out of it using hg qref -X:

$ hg qref -X .             # take changes out of current patch and back into `hg diff`
$ hg qcrecord new-patch 

This will open a console-based dialog allowing you to select, file by file, hunk by hunk, and even line by line, which changes you want to record into new-patch. When you first launch hg qcrecord, it shows you a list of modified files:

SELECT CHUNKS: (j/k/up/dn/pgup/pgdn) move cursor; (space/A) toggle hunk/all
 (f)old/unfold; (c)ommit applied; (q)uit; (?) help | [X]=hunk applied **=folded
[X]**M hello.cpp

Let’s now press ‘f’ to unfold hello.cpp:

SELECT CHUNKS: (j/k/up/dn/pgup/pgdn) move cursor; (space/A) toggle hunk/all
 (f)old/unfold; (c)ommit applied; (q)uit; (?) help | [X]=hunk applied **=folded
[X]    diff --git a/hello.cpp b/hello.cpp
       2 hunks, 4 lines changed

   [X]     @@ -1,4 +1,5 @@
            #include <iostream>
      [X]  +#include <cmath>
           #include <cstdlib>

           double square(double x)

   [X]     @@ -8,5 +9,6 @@

            int main()
      [X]  -  std::cout << square(3.2) << std::endl;
      [X]  +  double x = 2.0;
      [X]  +  std::cout << std::sqrt(square(x)) << std::endl;

Folding multiple patches into one

The hg qfold command allows you to merge a patch into another one:

$ hg qgoto my-first-patch      # go to first patch
$ hg qfold my-second-patch     # fold second patch into it

Pushing only one patch, reordering as needed

Mercurial 1.6 introduced the new hg qpush --move command, which does exactly that. It lets you reorder your patch queue without resorting to manually editing the series file.