I really don’t know much about security, at all. It’s a big field touching almost every aspect of computing, and I only occasionally get some exposure to it, as part of my WebGL work.
But recently, I’ve come across some browser security articles (like this and this) that paint a picture of browser security that can’t even accommodate the few examples I’ve personally had to deal with. Indeed, they tend to reduce browser security to just a couple of aspects:
- Arbitrary code execution
- Information leakage across browser tabs
So they proceed to judge browser security based only on a few features revolving around these two aspects, chief among which are sandboxing and process separation.
These aspects of security are certainly important and interesting, but do they really deserve to be glorified as the be-all and end-all of security?
In my limited experience with WebGL, these aspects have indeed shown up in certain bugs we’ve fixed, such as crashes involving heap corruption. We took them very seriously and rated them as ‘critical’ because, theoretically, they are the kind of bugs that can lead to arbitrary code execution. In practice, however, as far as I know, we haven’t seen any of them actually exploited, and for good reason: a majority of them are probably not practically exploitable, especially since mitigations such as ASLR and DEP are in use. More importantly, these bugs have been easy to fix, so they just got fixed before they could be widely exploited.
So what I want to talk about here is other categories of bugs I’ve encountered around WebGL, that were not as easy to fix.
Example 1: cross-domain information leakage
There was a flaw in version 1.0.0 of the WebGL spec, which Firefox 4 followed, that led to a cross-domain information leakage vulnerability. Details are given on that page; suffice it to say here that it allowed a malicious script from one domain to read back images from other domains, which is a serious concern. That vulnerability was fixed in Firefox 5, but the fix was heartbreaking: it involved disallowing the use of cross-domain images in WebGL, which broke some legitimate Web pages. A way forward has since been implemented.
There are plenty of examples of cross-domain information leakage vulnerabilities; they are a key part of the Web landscape as they often shape the boundaries of what’s doable and what isn’t (read this). For example, they are the reason why we can’t allow regular Web pages to render other Web pages inside of WebGL scenes, and beyond WebGL, they are now a key technical challenge for CSS Shaders. In addition to shaping new Web specifications, they also make some optimizations unsafe to use in, say, Canvas 2D implementations.
Perhaps it’s worth underlining that information leakage across domains has little to do with information leakage across tabs, which is why process separation is mostly irrelevant here. The above-mentioned cross-domain leakage vulnerability required only one browser tab to exploit: the test case had only one canvas, and even if some exploit used two canvases from two different domains, they could still be put in iframes in a single Web page.
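To make the constraint concrete, here is a toy sketch (not actual browser code; all names are hypothetical) of the kind of rule a browser enforces: once a canvas has consumed an image from another origin without CORS approval, it becomes “tainted” and pixel readback must be refused.

```javascript
// Toy model of canvas tainting. Real browsers implement this inside the
// canvas/WebGL machinery; this just illustrates the decision logic.
class CanvasState {
  constructor(pageOrigin) {
    this.pageOrigin = pageOrigin;
    this.tainted = false;
  }
  drawImage(imageOrigin, corsApproved) {
    // Same-origin images are always fine; cross-origin images are fine
    // only if the server opted in via CORS (e.g. crossOrigin="anonymous").
    if (imageOrigin !== this.pageOrigin && !corsApproved) {
      this.tainted = true; // taint is permanent for this canvas
    }
  }
  readPixels() {
    if (this.tainted) {
      throw new Error("SecurityError: canvas is tainted");
    }
    return "pixel data";
  }
}
```

Note that the check doesn’t involve tabs at all: both origins can coexist in one page, so only the origin bookkeeping, not process separation, prevents the leak.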
Example 2: browser or driver bugs exposing video memory
We’ve seen (and fixed!) a few bugs whereby it was possible, through WebGL, to get read access to random parts of video memory.
Sometimes it was our fault (like this one): we weren’t correctly programming the graphics system to clear new surfaces, so they still contained contents from earlier usage of that memory area.
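This bug class is easy to illustrate with a toy allocator (pure JavaScript, all names hypothetical): when memory is recycled, skipping the clear step means a freshly allocated surface can observe whatever its previous owner left behind.

```javascript
// Toy illustration (not browser code) of a recycling surface allocator.
// If allocate() skips the clear, stale contents leak into the new surface.
class SurfacePool {
  constructor() {
    this.free = [];
  }
  allocate(size, clear = true) {
    const buf = this.free.pop() || new Uint8Array(size);
    if (clear) {
      buf.fill(0); // the step that was missing in the buggy case
    }
    return buf;
  }
  release(buf) {
    this.free.push(buf);
  }
}

const pool = new SurfacePool();
const a = pool.allocate(4);
a.set([1, 2, 3, 4]); // "secret" pixels from a previous user
pool.release(a);
const leaky = pool.allocate(4, /* clear= */ false);
// leaky still contains [1, 2, 3, 4] -- the previous owner's data
```

The real-world version is the same story at the GPU level: the “pool” is video memory, and the “secret pixels” were whatever window or texture last occupied that memory.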
Sometimes it was the driver’s fault (like this one and this one): despite us correctly programming the graphics system to clear our video memory, the driver got it a bit wrong and you ended up with your Terminal window painted inside a 3D scene. Regardless, it is the browser’s duty to ensure that such bugs don’t affect the user as a result of browsing. That latter bug is why we blacklisted Mac OS 10.5 for WebGL, but the other one affects newer OSes, so I encourage all users to ensure that they are on the latest stable version of their favorite browser, which has a work-around.
Example 3: client denial-of-service
Denial-of-service vulnerabilities are a very big deal for servers, because ill-intentioned people can profit from taking down a server in this way. In the case of clients (like Web browsers), the profitability of a denial-of-service (DoS) attack is much more limited, or even nonexistent in many cases. We don’t see a lot of Web pages trying to DoS your browser, because all they would gain from it is… that you wouldn’t visit them again.
DoS vulnerabilities have always been a reality of the Web platform, and there aren’t great solutions for avoiding them. For example, a script can allocate a lot of memory, denying other programs on your computer the “service” of having that memory available to them; and if the browser decided to limit how much memory a script can use, that would certainly collide with legitimate use cases, and there would still be plenty of other DoS vulnerabilities not involving scripts at all. Fun experiment: on a browser that does hardware-accelerated rendering, which will soon be all browsers, try to saturate video memory with a Web page containing lots of large image elements.
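The memory-allocation variant is one line of intent. Here is a deliberately capped toy version (typed arrays standing in for the large images, with a small budget so it stops on its own); a hostile page would simply have no cap:

```javascript
// Capped toy version of the allocation DoS. Keeping every chunk reachable
// in the array prevents garbage collection, so the memory stays claimed.
function hogMemory(chunkBytes, budgetBytes) {
  const hoard = [];
  let total = 0;
  while (total + chunkBytes <= budgetBytes) {
    hoard.push(new Uint8Array(chunkBytes)); // held alive => not collectable
    total += chunkBytes;
  }
  return total; // bytes actually claimed
}

hogMemory(1 << 20, 8 << 20); // grabs eight 1 MiB chunks, returns 8388608
```

The point is not that this is clever (it isn’t) but that any per-script limit strict enough to stop it would also break legitimate memory-hungry applications.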
WebGL, like every other 3D API since OpenGL 1.1 was released in 1997 with the “vertex arrays” feature, has a specific DoS vulnerability: it allows a script to “hog” the GPU, which is particularly annoying as today’s GPUs are not preemptible. Modern OSes have mechanisms to reset the graphics driver when it’s been frozen for a couple of seconds, but many drivers still respond poorly to that (they crash). It’s sad, but we haven’t seen this hurting many users in the real world, and at least it has led to good conversations with GPU vendors; as a result, things are improving, albeit slowly.
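For the curious, here is roughly what such a shader looks like (a hedged sketch; the loop bound and the resulting frame time are illustrative, and actual behavior varies by GPU and driver). WebGL’s GLSL ES 1.0 requires loop bounds the compiler can see, but a compile-time-constant bound can still be large enough that drawing a single full-screen quad takes seconds, during which a non-preemptible GPU can do nothing else:

```javascript
// Sketch of a GPU-hogging fragment shader, embedded the way WebGL pages
// embed shaders: as a source string handed to gl.shaderSource().
const hogFragmentShader = `
  precision mediump float;
  void main() {
    float acc = 0.0;
    for (int i = 0; i < 100000; i++) {  // large but compile-time-constant
      acc += sin(float(i) * 0.1);
    }
    gl_FragColor = vec4(fract(acc), 0.0, 0.0, 1.0);
  }
`;
```

Multiply that per-pixel cost by a few million pixels and the GPU is busy long enough to trip the OS’s reset mechanism, which is exactly the driver-crash scenario described above.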
Those are the three worst kinds of WebGL-related vulnerabilities that I’ve personally had to deal with. The security techniques that some people treat as the Alpha and the Omega of browser security are irrelevant to them. I don’t mean that these techniques (sandboxing, process separation…) are useless; they are extremely useful in general, but they are useless for the particular kinds of security bugs that have been scariest in my own limited experience. This means that browser security does not boil down to just these techniques, as the security articles I linked to at the beginning of this post would have you believe.