Feb 13

How Often Do Firefox Users Change Their Default Preferences?

Just a quick post to highlight some very interesting work by my colleague, Monica Chew.

Monica started out by asking the question: “How often do Firefox users change their default preferences?” Security and privacy are her focus at Mozilla, so she collected data on what percentage of users change various security and privacy preferences, to see what insights could be gleaned.

More details are on her blog here: http://monica-at-mozilla.blogspot.com/2013/02/writing-for-98.html

Nov 12

OWASP AppSecUSA 2012

I recently attended my first OWASP conference, AppSec USA, which took place at the end of October in Austin, Texas.

This year I’ve been trying to attend conferences besides the small set I’ve attended in the past. One thing that attracted me to the AppSec USA conference was OWASP’s focus on building secure software and websites, not just on breaking software or finding new vulnerabilities.

In Austin, I spoke to several attendees who were extremely interested in the security features we are working on for Firefox, and had some especially good discussions about Content Security Policy (aka CSP). CSP was mentioned frequently over the course of the conference presentations and was also listed as a ‘top 10 web defense’. I spoke to folks from several major sites who were either testing CSP or planning to start rolling it out over the next year. This is personally relevant, as I’ve been working on making Firefox CSP 1.0 compliant [1]. It was extremely encouraging and motivating to hear so much advocacy for CSP – it really seems to be gaining momentum on the web.

I also heard two other neat uses for CSP: detecting mixed content on one of your site’s pages, and detecting infected browsers by catching their requests to malicious sites. CSP’s ability to specify a report-only policy, so sites can try out a policy and evaluate what violations occur without the risk of breakage, seems to be a particular favorite feature. Additionally, I told folks interested in CSP about the User CSP add-on (written by Kailas Patel as his Google Summer of Code project), which allows a user to apply a custom CSP to a site or auto-generate an initial CSP. There are many similar projects under development to help generate a policy for a site.
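To make the report-only feature concrete, here’s a hypothetical example (the report path is made up): a site serves its pages with a header like the one below, and the browser POSTs JSON violation reports to the given URI without blocking anything.

```http
Content-Security-Policy-Report-Only: default-src 'self'; report-uri /csp-violation-report
```

Once the reports stop showing unexpected violations, the site can switch to the enforcing Content-Security-Policy header with the same directives.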

Mozilla’s CTO Brendan Eich gave an interesting and well-received talk on how we ended up with the same-origin model of the web today, and on future developments towards more secure JavaScript. Yvan Boily from the Security Assurance team also spoke on how Mozilla delivers security at scale, involving both community participation and developing open source automated security tools. Michael Coates, Mozilla’s Director of Security Assurance, was part of a panel on bug bounties along with speakers from Etsy, Facebook and Google. The bug bounties panel was popular with attendees, and afterwards quite a few people said they would look into starting their own bug bounty programs based on the experiences shared by the panelists.

I also attended quite a few sessions about issues with SSL and the current CA system. It was great to hear folks pushing HSTS (by the way, Firefox now has an HSTS preload list, thanks to David Keeler – see his blog post for more details) and CA pinning, which Camilo Viecco is currently working on.

Overall, I really enjoyed being able to talk to other folks on the defense side of security. Additionally, I feel like I saw and heard a lot to support our Security Engineering roadmap – it seems to be really well lined up with the mechanisms that folks who protect sites are looking for browsers to provide. I really hope to be able to attend the next AppSecUSA in NYC in 2013 to continue the dialogue and maybe get some ideas for new security features!

(PS: speaking of CSP and security mechanisms, I’ll also take this chance to plug the work Isaac Dawson has recently done looking at which security headers are used by the Alexa Top 1,000,000 sites – this is some fascinating research!)

[1] For details, see bug 783049 and bug 746978

Jul 12

The HackWEEKDAY Contest at HITB Amsterdam 2012

At the end of last May, Mozilla sponsored the HackWEEKDAY contest at the third annual Hack In the Box conference in the Netherlands. The contest ran alongside the rest of the HITB conference, which featured presentations on security topics including new iPhone jailbreaks and a second-day keynote from Bruce Schneier.

Lucas Adamski and I travelled from San Francisco to Amsterdam, where we met up with Christian Holler and Frederik Braun, a former Mozilla security intern. Both Christian and Freddy are based in Germany, so it was good to be able to catch up with them in Europe and hear about Christian’s current projects and Freddy’s Masters degree research on web browser security.

Mozilla and HITB folks

Christian, Youri, Dirk, Freddy and Lucas

The HackWEEKDAY contest goal was pretty simple: write a Firefox add-on related to security. The contest had previously been run at an earlier HITB conference in Malaysia, where Gary Kwong was Mozilla’s representative. I made sure to talk to Gary before leaving for the conference and got some great insights into what to expect. Additionally, I took his suggestion to bring along lots of Firefox swag! The prize for creating the best add-on, judged by a panel of Mozilla and HITB representatives, was 1337 euros.

HITB’s Youri van de Zwart and Dirk Van Veen did an excellent job preparing for the contest. The contestants had a great space in which to hack, their own wireless network separate from the main conference, and an SVN server for their code.

The HackWEEKDAY space

The contest took place over two days, with a six-hour hacking session each day. The Mozilla representatives (including myself) attended to help brainstorm project ideas and to aid with add-on development. For many of the contestants, HackWEEKDAY was their first exposure to writing a Firefox add-on.

Hackers hacking on add-ons

The participants ended up forming four teams, each working on a different Firefox add-on:

* Sernin van de Krol and Sander Kerkdijk created an add-on to prevent password reuse. It alerts when a user creates an account using a password that they have already used and saved in Firefox’s password manager. It also warns when a user attempts to reuse, on a site with an HTTPS login page, a password already saved for a site with an HTTP login page, since that password could easily have been passively intercepted.
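A rough sketch of that reuse check, in Python with a hypothetical data model (the actual add-on was JavaScript and used Firefox’s real password manager API), might look like this:

```python
def find_password_reuse(saved_logins, new_site, new_password):
    """Return previously saved sites that already use new_password.

    saved_logins: list of (site_url, password) tuples – a stand-in for
    the entries in Firefox's password manager (hypothetical data model).
    """
    return [site for site, pw in saved_logins
            if pw == new_password and site != new_site]


def https_downgrade_risk(saved_logins, new_site, new_password):
    """Flag reuse on an HTTPS login page of a password previously saved
    for an HTTP site, where it could have been passively intercepted."""
    return new_site.startswith("https://") and any(
        site.startswith("http://") and pw == new_password
        for site, pw in saved_logins)
```

The interesting part is the second check: a password typed over plain HTTP must be assumed compromised, so reusing it even on an HTTPS page buys nothing.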

Sernin and Sander present their add-on

* Vianney Darmaillacq and Klaus de Graaf built an add-on that integrates GPG with Firefox to make it easier for users to encrypt and decrypt text in the browser. The user can select a piece of text in Firefox, right-click and choose to encrypt. The data is passed to GPG by running it in a shell, and the encrypted text is then copied to the clipboard for easy pasting into an email or similar. Likewise, the user can highlight encrypted text, choose “decrypt”, and the plain text will be copied to the clipboard after GPG has decrypted it.
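The shell-out pattern they used can be sketched like this (my own Python simplification, not the add-on’s actual JavaScript; actually running `encrypt_selection` requires gpg on the PATH and a key for the recipient):

```python
import subprocess


def gpg_encrypt_command(recipient):
    # Build the argv for an ASCII-armored encryption of stdin.
    # --batch keeps gpg from prompting interactively.
    return ["gpg", "--batch", "--armor", "--encrypt", "--recipient", recipient]


def encrypt_selection(text, recipient):
    # Pipe the selected text through gpg and return the armored
    # ciphertext, ready to be placed on the clipboard.
    result = subprocess.run(gpg_encrypt_command(recipient),
                            input=text.encode(),
                            capture_output=True, check=True)
    return result.stdout.decode()
```

Shelling out to the user’s existing gpg binary sidesteps reimplementing any cryptography in the browser, at the cost of requiring GPG to be installed and configured.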

Vianney and Klaus present their add-on

* Paul Hooijenga also focused on password reuse. His add-on took a different approach from the other project: it checks currently saved passwords for duplicates and alerts if any are found. It also uses a publicly available blacklist of sites that store passwords unencrypted, and warns if a password used on one of those sites is reused elsewhere.

Paul presents his add-on

* Pieter Vlasblom and Erik Kooistra made an add-on to automatically verify hashes of files the user downloads. When a file is being downloaded, the add-on looks for a file in the same directory on the server with the same name and an .md5 or .sha1 extension. It then tries to download those hashes, preferring HTTPS and falling back to HTTP if necessary. The SHA-1 hash is preferred to the MD5 hash if both exist. If verification succeeds, a message is displayed unobtrusively; but if a hash exists and does not match the computed hash of the downloaded file, the file is deleted and a warning is displayed.
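The verification step itself is only a few lines; here is my own Python simplification (the real add-on was JavaScript and also handled fetching the hash files and the download UI):

```python
import hashlib


def verify_file(path, sha1_hex=None, md5_hex=None):
    """Check a downloaded file against a fetched .sha1 or .md5 digest,
    preferring SHA-1 when both are available (as the add-on did).

    Returns True/False for a match/mismatch, or None when no hash file
    was found and there is nothing to verify.
    """
    if sha1_hex:
        h, expected = hashlib.sha1(), sha1_hex
    elif md5_hex:
        h, expected = hashlib.md5(), md5_hex
    else:
        return None
    with open(path, "rb") as f:
        # Hash in chunks so large downloads don't have to fit in memory.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected.strip().lower()
```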

Pieter and Erik present their add-on

After some serious debate, it was decided that the winner was Pieter and Erik’s hash-checking add-on. During judging, the panel discussed the fundamental bootstrapping problem with checking hashes – e.g. if the hash is downloaded over HTTP, it can’t be trusted. Likewise, if the page containing the links to the download and its hash is accessed over HTTP, those links can’t be trusted even if they are HTTPS links, and so on. Since the add-on was something most users could immediately use, adding security transparently without requiring users to make a security decision, the panel still felt this was the best project, although all of the add-ons submitted were quite good and had their advocates. The winning team was presented with their prize by Lucas and Dirk during the closing ceremonies of the conference.

Pieter and Erik collect their prize from Lucas and Dirk

To close on a more personal note, I really enjoyed the whole experience – it was particularly interesting to me to finally attend a security conference in Europe and to be able to compare and contrast the experience against the many conferences I have attended in North America. I also got a chance to do a brief demo of B2G for the contestants, Christian spoke about his adbfuzz framework and gave a brief demo, and Lucas chatted with many folks interested in what Mozilla is doing at the moment, both related and unrelated to security. The conference and contest were a great opportunity for us to connect with the larger security community in person, and I’m very thankful I got to be a part of it!

May 12

One Year at Mozilla: Working on the Security Engineering Team

A few months ago, I decided I would write a blog post after working at Mozilla for one year. I wasn’t sure what it would be about, but I knew that I wanted to talk about the sort of things I’ve been working on, my team and our projects.

I joined Mozilla in May 2011 to work on client security, mostly focusing on desktop and mobile Firefox. I’ve worked in various security roles throughout my career, but personally feel that an engineering role is where I am the happiest and have the most to offer. Although I’ve been interested in security since I was a teenager exploring the Internet, development is where my professional career began. I worked on Solaris at my first job, and subsequently began a long stint as a Windows programmer at a number of companies. At one of these companies, I even worked on software for Windows CE, helping build one of the first-generation mobile firewalls. I became more and more interested in focusing on security professionally, and I found myself oscillating between ‘security consulting’-type work and ‘security engineering’-type work; both have different challenges and their own sets of pros and cons. As brilliantly described by David Mandelin in a blog post about starting employment at Mozilla, the hardest problem I had to face on my arrival was figuring out where to focus my efforts to most improve our users’ security and privacy.

Among other things, I worked on reviewing Firefox’s silent updates feature, reviewing the Android Sync client-side implementation, and tracking and advising on security matters for Firefox on Android. I also participated in many feature security reviews, worked on a few random security bugs (which helped a lot with learning my way around the Mozilla codebase and development environment), and finally began work on a Firefox feature I hope to discuss further in a future blog post once it’s landed!

In February 2012, Mozilla’s security teams reorganized. Previously, the security teams had been organized into separate groups responsible for operational security (securing, hardening, and monitoring servers and devices throughout Mozilla), infrastructure security (web applications and website security) and client security (Firefox for desktop and mobile and other client-side applications – including fuzzers looking for security bugs in these products). The reorganization merged all teams responsible for reviewing and maintaining security across the organization into the Security Assurance team, and established the Security Engineering team to focus on implementing security and privacy features. Security Engineering works on these features both in conjunction with other browsers and websites to improve the security and privacy of the web as a whole, and to attempt to advance the state of the art by going where other browsers cannot or will not go, in alignment with Mozilla’s manifesto of putting users first. After the reorganization, I am now a member of the Security Engineering team – it’s been a very busy first few months for us! Of course, in classic Mozilla style, teams have pretty fuzzy boundaries, so I still work with the same people every day on our shared goal of improving security.

One thing that is incredibly refreshing about working on security at Mozilla, as many of my colleagues will attest, is being able to talk about our plans and what we are working on – this is very unusual for many security folks! Security Engineering follows the same open and public approach to developing features that the rest of the Mozilla project takes. This means that we open ourselves up to getting advice and input on our security work from the rest of the community, which is especially valuable early on in a project’s lifecycle. By “community” I mean both the security community (just like in cryptography, the more folks who try to break your system the better!) and the Mozilla community, where we can gain insight into the security issues and challenges our users are facing. We can ship early versions of features disabled by default (like click-to-play) in Nightly builds, and people can turn the feature on, try it out, and provide feedback, ensuring we’re not planning the feature in a vacuum.

At the first few meetings of the Security Engineering team, we focused on updating the existing Security Roadmap. The roadmap began as a survey conducted by Lucas Adamski in early 2011. Respondents essentially played a variant of Planning Poker, in which everyone was given a set number of points to allot to various ideas for security features or research that had been generated in a previous brainstorming session. This helped gauge the feelings of people interested in security within the Mozilla project on what was most important to focus on security-wise. Sid Stamm, Mozilla’s Lead Privacy Engineer, established a Privacy Roadmap along similar lines last year, focusing on improving the privacy of our users on the web.

The key to both the Security and Privacy roadmaps is that they are discussed and refined on a regular basis. This agility allows us to keep the roadmaps responsive to current events on the Internet and also helps make sure we are in line with the overall priorities of the Mozilla project. It lets us set long-term goals and make sure we are focusing on the right things. We tend to keep projects that aren’t actively being worked on at a lower priority, ensuring we finish the higher-priority projects first or have actively decided to change a project’s priority. It’s no surprise that many of the Security Engineering team’s quarterly goals are pretty much lifted from our roadmap.

Personally, I’m currently working on two P1 items on the Security Roadmap:

  • <iframe> sandbox – an attribute that can restrict a piece of content. I became interested in this while previously working on Flash Player, and during my first week at Mozilla, Jonas Sicking suggested <iframe> sandbox as a potential project to pick up. I’ve completed writing all the code and tests for this, and it’s going through code and security review now. I hope to land it sometime in the near future.
  • My main project at the moment is researching sandboxing the Firefox process on Windows, in a similar fashion to how IE’s and Chrome’s processes are sandboxed, to try to thwart successful exploitation. I – and other Mozilla security folks – have frequently heard from people in the security community that this is one of the main things they would like us to do to improve the security of Firefox; however, it’s a pretty complicated project. At the moment I’m working on a proof-of-concept to show that it’s possible to at least sandbox some parts of Firefox, aided by our intern, Marshall Moutenot. I plan to talk more about this project in a future blog post as the work progresses.
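As an aside, the sandbox attribute from the first item is applied as plain markup. A generic illustration, following the HTML spec’s model rather than anything Firefox-specific (the URLs are placeholders):

```html
<!-- Fully restricted: no scripts, forms, plugins or popups, and a unique origin. -->
<iframe sandbox src="https://example.com/untrusted.html"></iframe>

<!-- Selectively relaxed: scripts may run, but the frame keeps a unique origin. -->
<iframe sandbox="allow-scripts" src="https://example.com/widget.html"></iframe>
```

The design is deny-by-default: a bare `sandbox` attribute removes all capabilities, and the page opts back in to individual ones with `allow-*` keywords.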

Another P1 roadmap item the Security Engineering team has been spending a fair bit of time discussing and pushing along is “opt-in activation of plugins”, often referred to as “click-to-play for plugins”. From a security-focused perspective, our aim for this feature is to help drive users to update their plugins and to prevent ‘drive-by’ attacks attempting to exploit plugin vulnerabilities.

The Security Engineering team is hard at work on many other interesting projects as well:

  • Sid Stamm is researching giving users better control over their cookies, especially third-party cookies
  • Tanvi Vyas is working to make it easier for users to see when their passwords are being submitted insecurely and exploring creating a workable user experience around this
  • Camilo Viecco dug into some bugs to make it harder for sites to fingerprint and identify users on the web and is now working on implementing CA pinning in Firefox
  • Lucas Adamski (our fearless leader) has been knee-deep in Open Web Apps and B2G for the past few months, driving development of the security model and shaping our story around permissions for WebAPIs and Open Web Apps
  • David Keeler, after two previous stints as a Mozilla intern, joined Security Engineering full-time recently and is doing an awesome job, diving right into working on click-to-play for plugins
  • David Dahl also recently joined the Security Engineering team. David previously worked on the Firefox team and is now working on DOMCrypt, both the implementation and efforts to get it spec’d and standardized through the W3C process. He’s also been helping with the ‘Sign Into The Browser’ feature.

All this exciting stuff is in addition to our attempts to keep up on everything going on with security and privacy on the web (and as part of the Mozilla project) and discussing security and privacy with community folks whenever we can!

In closing, if you’re interested in Mozilla’s Security Engineering efforts, our meetings are on Thursdays at 3pm PST. Meeting announcements are posted to mozilla.dev.planning and mozilla.dev.security every week.

We also warmly welcome feedback and discussion on security features or the Security/Privacy roadmaps on the mozilla.dev.security mailing list/group or on #security on irc.mozilla.org.

And… if this all sounds challenging and really interesting (and often fun!), we’re hiring security and privacy engineers!

Thanks to Lucas Adamski, Sid Stamm, and Tanvi Vyas for reviewing and providing feedback for this post!

Feb 12

INFILTRATE 2012 Conference Recap

Recently I was lucky enough to be sent by Mozilla to Miami to attend the INFILTRATE 2012 Security Conference. The conference is in its second year and took place January 12th and 13th at the Gansevoort Hotel in Miami Beach. It was a single-track event (which I greatly prefer) focused on offensive security (i.e. attacking rather than defending) and had somewhere around 150 attendees. This year I’m attempting to go to smaller conferences, particularly ones I haven’t attended previously, and the lineup looked great, so off I went!

Here’s my attempted recap of the two days of presentations:

Day 1

Keynote – Thomas Lim (organizer of SyScan conference, COSEInc)

Thomas Lim’s first-day keynote was incredibly entertaining. The two main themes of his presentation were security conferences and cyberwar. His opinion on conferences is that there are just too many, with too many repeat presentations and not enough good speakers to go around; these factors bring down the quality of all conferences. As someone who has put on many conferences in Asia, Thomas pointed out that getting into throwing security conferences to make money is foolish, as most of them don’t. His list of hard-learned mistakes: thinking a good speaker program is enough to make a successful conference, undercharging for admission (you need to cover your costs!), and accepting great talk submissions that ended up being given by terrible speakers. (At INFILTRATE, the organizers had the speakers give their presentations to them beforehand, providing a lot of feedback and suggestions – it showed!) On cyberwar, Thomas presented his personal opinions on what’s really going on currently and his predictions of what the future holds. The talk was laced with humour throughout and set a really good tone for the rest of the conference.

Blackberry Playbook – Ben Nell & Zach Lanier (Intrepidus Group)

This talk shared the results of researching the security configuration and design choices of the Blackberry Playbook. The architecture of the Playbook and its security mechanisms were discussed, as well as ways to circumvent them, such as a recently published jailbreak. Installing updates on the Playbook follows the familiar pattern of polling for new updates over HTTPS, getting update metadata over HTTPS, and then downloading the actual update over HTTP. The speakers revealed that the QNX Software Development Platform is full of useful tools, which they took advantage of to analyze the device. One of their main findings was that a file with weak permissions on the device leaked a huge amount of useful info, including a session token that could be used to connect to the proxy used for networking with other Blackberry devices. They also shared a few other interesting findings, including how to download any app (for free) from the Playbook app store.

Unearthing the World’s Best WebKit bugs – Michel Aubizzierre

This talk was one I was very interested to hear, as it builds on the research done by Adam Barth et al. on mining public bug trackers (using Bugzilla/Firefox as an example) to identify vulnerabilities. Michel referred to the previous paper at the beginning of his talk and then went on to describe how he used a machine learning library to predict whether a particular commit is a vulnerability fix (and hence interesting to an attacker). The tool looks at data points such as: who committed the fix? Does the commit message reference a private bug? Does the commit message include an ‘interesting’ word such as ‘crash’? Was the fix reviewed by a member of the security team? Michel also pointed out how nice it is (for an attacker/exploit developer, at least) that commits often also include automated tests – a free PoC to trigger the vulnerability! He stated that while Chrome itself picks up WebKit security fixes quite quickly, other vendors and platforms are much slower – particularly on mobile, where carriers have to be involved in shipping fixes.
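To illustrate the kinds of signals involved – this is not Michel’s actual model, which used machine learning rather than fixed weights, and the commit field names here are hypothetical – a crude scorer might look like:

```python
def security_fix_signals(commit):
    """Score a commit dict on signals like those Michel described.

    A deliberately crude keyword/flag heuristic, standing in for a
    trained classifier; higher scores suggest a likely security fix.
    """
    score = 0
    msg = commit.get("message", "").lower()
    if any(w in msg for w in ("crash", "overflow", "use-after-free")):
        score += 2  # 'interesting' words in the commit message
    if commit.get("references_private_bug"):
        score += 3  # fix for a hidden (likely security-sensitive) bug
    if commit.get("reviewed_by_security_team"):
        score += 3  # security-team review is a strong signal
    if commit.get("includes_test"):
        score += 1  # an included regression test doubles as a free PoC
    return score
```

A real classifier would learn these weights from labeled historical commits instead of hard-coding them, but the feature set is the same idea.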

Effective Denial of Service Attacks Against Web Application Platforms – Alexander Klink & Julian Waelde

The key finding in this presentation was that many web application platforms use a hash function whose collisions can fairly easily be induced, which leads to an easy denial of service against the server. Many web application frameworks deliver HTTP request params to the applications running on them as a hash/associative array (implemented with an underlying hash table) – for example, something like params[‘a_param’] is a common pattern. If a request can be crafted so that there are many params which all hash to the same value in the underlying hash table, this leads to worst-case performance (O(n^2) for lookup/storage). The speakers presented two techniques that can be used to find hash collisions: using equivalent substrings, or a “meet in the middle” approach; the hashing function used by the hash table determines which technique must be used. Using their approach, they found that in practice a 500K POST request could consume ~1 min of CPU time.

Alexander and Julian also discussed their experiences reporting this issue to a variety of web framework vendors. A common mitigation used by vendors was to limit the maximum number of params, lessening the number of collisions that can be created via one request. The actual suggested fix is to use a randomized hash function that avoids collisions as much as possible (djb suggests using a treemap instead of a hash table, for what it’s worth). The speakers also made the important point that since the malicious request can just be a normal POST, it can be triggered via an XSS on a webpage – making it possible to make browsers complicit in DDoS’ing a server. Another interesting point is that Perl fixed this issue in 2003, but many other web frameworks remained vulnerable until the speakers reported the issue in 2011.
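To make the “equivalent substrings” technique concrete, here is a small demonstration against DJBX33A, the multiply-by-33 string hash PHP 5 used for its hash tables (illustration only; the colliding pair “Ez”/“FY” is specific to this hash):

```python
from itertools import product


def djbx33a(s):
    # DJBX33A: the multiply-by-33 string hash used by PHP 5's hash tables.
    h = 5381
    for ch in s:
        h = (h * 33 + ord(ch)) & 0xFFFFFFFF
    return h


# "Ez" and "FY" contribute identically (69*33 + 122 == 70*33 + 89 == 2399),
# so every concatenation of these two blocks hashes to the same value:
# 2**8 = 256 distinct keys, all landing in one bucket.
keys = ["".join(p) for p in product(["Ez", "FY"], repeat=8)]
```

Scaling the same construction up to thousands of longer keys is what turns each inserted param into a linear scan of one bucket, giving the O(n^2) behaviour the speakers exploited. (Python’s own dict hash has been randomized per-process since this class of attack was reported.)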

Heap of Trouble: Breaking the Linux Kernel SLOB Allocator – Dan Rosenberg

SLOB is one of three kernel heap allocators used by Linux. It’s commonly found in embedded systems that require a low memory footprint, but not often used in mobile devices at the current time. As SLOB is slightly different from the other heap allocators, vulnerabilities that are difficult or impossible to exploit otherwise can sometimes be used against systems that utilize it. Dan went into the technical details of SLOB and covered some techniques for exploiting heap overflows on a system using this allocator.

For those interested in details, check out his slides at http://vulnfactory.org/research/slob.pdf

Real World SQL Injections – Leonardo Alminara

Leonardo talked about some of the difficulties associated with exploiting blind SQL injections in the real world (Unicode languages, and delays or unreliability resulting from bad network connections or having to go through many hops). He also presented some techniques to optimize and better detect SQL injections, including timing attacks and script behaviour prediction. A technique for validating results by using SQL functions themselves to checksum the retrieved data was also demonstrated.

A Sandbox Odyssey – Vincenzo Iozzo

This was another talk I was really looking forward to, and it did not disappoint! Vincenzo’s talk discussed the OS X process sandbox, which is based on TrustedBSD and its Mandatory Access Control framework. He first dived into the implementation of the sandbox, the structure of its policies, and how they are enforced by OS X. He then went on to talk about possible sandbox escape avenues – it turns out that not everything can be enforced with the current implementation, since there are some APIs that do not have hooks for sandboxing checks. Vincenzo discussed the kernel surface exposed to a sandboxed process and the potential avenues of exploitation that different components may offer. Commonly in the real world, we see that the most secure policy for a process may not result in something usable. This was demonstrated to be true yet again when reviewing sample sandbox policies for some popular applications: they have to allow a large number of potentially dangerous operations in order to function correctly. As a counter-example, Vincenzo showed that Chrome’s renderer process sandbox is extremely tightly locked down. He compared many sandbox policies to essentially being a chroot, and gave the example of grabbing the system keychain and sending it off the machine – allowed by the sandboxes for some common processes! He made a strong point that sometimes exfiltrating data (as in the previous example) is the ultimate goal of the attacker; this is much more difficult to prevent than escaping the ‘chroot’ or escalating privilege. Vincenzo emphasized that there is a plethora of ‘interesting data’ inside or accessible from within a browser process – such as SQLite databases. He demonstrated an attack along these lines to hijack a popular mail website without needing to escape the sandbox at all.

Overall, this was a really awesome presentation – technical details of the sandbox itself, some solid research on how it’s used in the real world by apps, and some really thought-provoking ideas and attacks that sandboxing almost certainly isn’t the right way to mitigate.

Day 2

Man vs Machine – Andrew Cushman (Microsoft)

Andrew Cushman was the keynote speaker opening the second day of the conference. The gist of his talk was his belief that the biggest current challenges in cybersecurity are in the political realm, i.e. the regulation and policy space. He pointed out that countries have differing views on how the Internet should be run and (not) controlled. His talk also touched on the work Microsoft has done implementing their Secure Development Lifecycle and the results: for example, infections per 1,000 machines dropped from 20 for Windows XP to 5.7 for Windows 7 – compelling numbers! Microsoft’s executives mandated making the SDL compulsory in 2004, beginning a slow, steady and sustained effort to improve the security of Microsoft products. Andrew’s view is that this has greatly increased the cost of finding usable vulnerabilities in Windows and other Microsoft products. By rapidly detecting and suppressing exploits and focusing on removing entire classes of vulnerabilities, the progress since the days of Code Red et al. has been immense.

Proximity Card Access Systems – Brad Antoniewicz (Foundstone)

Brad discussed the security (or lack thereof) around access-card door systems, including a demo of how easy it is to get close to someone with a valid card in their pocket and clone it, gaining all their allowed physical access. He explained the basics of how common access card systems work and did a cool demo of brute-forcing a card reader using a custom tool he built with an Arduino microcontroller. Brad also delved into possible vulnerabilities in the back-end systems used by these access card systems, including the web-based admin interface. The best demo was sending the command to the door to open itself, while performing no logging at all in the admin console!

Easy Local Windows Kernel Exploitation – Cesar Cerrudo (IOActive)

This presentation started with a brief recap of the state of most Windows kernel exploit techniques: they aren't generic, don't work across Windows versions, and execute code in kernel mode, which makes it very easy to blue screen the system. Cesar set out to find an easy, reliable, portable way to exploit Windows kernel vulnerabilities. His insight was that most of the time an attacker is not necessarily interested in executing code in ring 0, but rather in elevating privileges. Once elevated, the attacker can access any restricted data or hijack a security-critical process (for example, injecting code into LSASS to obtain, and later crack, password hashes). Cesar found that he could remove the ACL of almost any Windows object, enable any privilege in a process token, or replace a process' token entirely with a privileged token. He discussed performing any of these three attacks with just one (sometimes two) writes to the kernel, without needing to run any kernel mode shellcode whatsoever. One key technique is that Windows (at the time of writing, at least!) hands out a lot of useful information (like the addresses of kernel structures) to an unprivileged user mode process via the NtQuerySystemInformation call. This differs from other operating systems, where an attacker must rely on information leakage vulnerabilities to obtain this sort of information. Using this technique leads to extremely portable and reliable exploits. A caveat is that if you steal another process' token, you have to increment the token's refcount in the kernel – otherwise, when your process using the stolen token exits, the token will be released when the refcount goes to 0, often causing the true owner's process to crash! Another surprising bit of info he revealed is that Windows doesn't check the token type when doing access checks.
He was able to steal an identity token (which shouldn't allow authenticating as that user) and set it as his process' primary token, giving his process SYSTEM privileges. Cesar also pointed out that there are many other things in Windows kernel structures that could be overwritten in this way via a kernel overflow; there's definitely more interesting research to be done here. This presentation was full of technical details and quality research.
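The refcount caveat is worth a concrete illustration. Here's a toy model of it in plain Python – none of this is real Windows code, and all the names are invented for illustration – showing why a stolen token must have its refcount bumped before the thief's process exits:

```python
# Toy model of kernel object refcounting, illustrating why stealing a
# token without incrementing its refcount crashes the true owner.

class Token:
    def __init__(self, privileges):
        self.privileges = privileges
        self.refcount = 0
        self.freed = False

def reference(token):
    token.refcount += 1

def dereference(token):
    token.refcount -= 1
    if token.refcount == 0:
        token.freed = True  # the "kernel" releases the object

# A SYSTEM process owns a privileged token (one reference).
system_token = Token(privileges={"SeDebugPrivilege", "SeTcbPrivilege"})
reference(system_token)

# Naive theft: the attacker's process uses the token but never added a
# reference, so when it exits the kernel dereferences once on its behalf...
dereference(system_token)
assert system_token.freed  # ...freeing the token out from under the owner: crash!

# The fix Cesar described: bump the refcount when stealing.
system_token2 = Token(privileges={"SeDebugPrivilege"})
reference(system_token2)   # owner's reference
reference(system_token2)   # attacker's extra reference
dereference(system_token2) # attacker's process exits cleanly
assert not system_token2.freed  # the owner's token survives
```

The model is obviously simplified, but it captures why the extra kernel write (incrementing the refcount) is needed for a clean exploit.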

Secrets In Your Pocket – Mark Wuergler (Immunity)

This talk started with the premise "In the battle between security and convenience, convenience is winning." From there, Mark went on to explore the data a wireless device reveals about its user, starting with the SSIDs the device has saved and keeps attempting to connect to. Once an attacker (or stalker) knows what SSIDs your particular device is looking for, they learn something about where you go (e.g. Starbucks) or where you work (e.g. Mozilla corporate wireless). The talk went on to cover the information stored in your cookies, which an attacker can obtain once they crack your wireless network – if they even need to – by injecting sites into your HTTP traffic and winning the race against the legitimate HTTP response. Injecting content can also trigger the browser's password autofill. Mark also made the point that many sites require SSL for login pages only, but not for the rest of their content – meaning that once you're authenticated, all sorts of information (contents of emails, social network profile data, etc.) is unprotected from a sniffing attacker. He demoed an attack against Facebook application installs: the application permissions shown when asking a user whether they want to install an app are implemented client side. Mark showed that an attacker could sniff a necessary token and then change the permissions requested by the app to ask for ALL permissions (without displaying this to the victim). When the user installs the application, it's then allowed full access to any of the user's Facebook data. The closing remark of the talk was "Information disclosures weaken the security of the organizations you are associated with" – obtaining information that could answer a secret question for a password reset was one example given. Pro tip: turn off wireless when you're not using it, and think about what your saved networks are broadcasting about you!
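An aside from me, not from the talk: the usual server-side defense against session cookies being sniffed off plain-HTTP pages is the cookie's Secure attribute, which tells the browser to only ever send the cookie over HTTPS. A minimal sketch using Python's standard http.cookies module (the cookie name and value are made up):

```python
from http.cookies import SimpleCookie

# A session cookie without the Secure attribute is sent over plain HTTP,
# where a wireless sniffer can grab it. With Secure set, the browser
# only transmits it over HTTPS; HttpOnly additionally hides it from
# injected JavaScript.
cookie = SimpleCookie()
cookie["session"] = "deadbeef"
cookie["session"]["secure"] = True
cookie["session"]["httponly"] = True

header = cookie["session"].OutputString()
print(header)  # a Set-Cookie value containing the Secure and HttpOnly flags
```

Of course, this only helps if the content pages themselves are served over SSL too – which is exactly the point Mark was making.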

The Stack is Back – Jon Oberheide

Another immensely entertaining and educational presentation. Jon spoke about exploiting stack overflows – not stack BUFFER overflows, but true stack overflows, where the stack itself exceeds the bounds of its allocated space. Although this sort of vulnerability seems 'impossible' to exploit, Jon refused to admit defeat, and he explained and demonstrated his research on exploiting stack overflows in the Linux kernel. The key insight is that while getting anything useful out of overflowing a single process' kernel stack is difficult, an attacker can create processes until two of them have adjacent kernel stacks. The first process can then overflow its kernel stack into the second process' kernel stack, overwriting a return address there. grsecurity and PaX take steps to prevent this exploitation technique, and a conference attendee stated they don't believe it's possible to use it on Windows.
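The adjacency trick can be sketched with a toy model (plain Python; real kernel stacks are fixed-size regions in kernel address space, and every detail here is invented for illustration):

```python
# Toy model: two processes' "kernel stacks" allocated back to back in
# one region. Process A's stack grows downward toward process B's, and
# pushing past A's allocation overwrites a saved "return address" at
# the top of B's stack.

STACK_SIZE = 16
memory = bytearray(2 * STACK_SIZE)

# Process B's stack occupies memory[0:16]; its saved return address
# sits in the highest 8 bytes of that region.
memory[STACK_SIZE - 8:STACK_SIZE] = (0xFFFF0000).to_bytes(8, "little")

# Process A's stack occupies memory[16:32] and grows DOWNWARD.
sp = 2 * STACK_SIZE

def push(value):
    global sp
    sp -= 8
    memory[sp:sp + 8] = value.to_bytes(8, "little")

for _ in range(3):        # 24 bytes pushed into a 16-byte stack...
    push(0x41414141)      # ...with attacker-controlled data

# The third push ran off the bottom of A's region and landed exactly
# in B's saved return address slot.
overwritten = int.from_bytes(memory[STACK_SIZE - 8:STACK_SIZE], "little")
assert overwritten == 0x41414141
```

The hard parts in the real attack – forcing the stacks to be adjacent and forcing the overflow – are exactly what Jon's research addressed; the sketch only shows why adjacency makes the corruption possible at all.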

Modern Static Security Checking for C/C++ – Julien Vanegue (Microsoft)

Julien presented an overview of the problems inherent in static analysis: trying to achieve a 'sound' analysis (with no false negatives) that is also a 'complete' analysis (with no false positives) – in practice, this is usually a tradeoff. He discussed the work being done at Microsoft to improve the results of static analysis on their code. MS has created a tool called HAVOC (Heap Aware Verifier for C and C++ programs). It's a plugin for the MSVC compiler that uses manual source code annotations to aid its analysis. The annotations can live in a separate file, side-by-side with the source code, which avoids the common pain of cluttering up the code with annotations that only the static analysis tools care about. One very interesting data point was a screenshot of Task Manager on a machine being used for analysis at MS: a 48-core machine with 100 GB of RAM! One example he cited was an analysis of around 300 KLOC of the Windows kernel to check that all pointers passed to the kernel are validated (i.e. point to somewhere sane in user space) – this took about six hours to run. One key difference between HAVOC and other static analysis tools is that it doesn't do 'state merging'. To avoid analyzing the nearly infinite number of states a program can pass through, many tools 'merge states' where control flow reconverges, which loses a lot of information about, for example, the values variables can actually take on along the program's real control flow paths. This can lead to a multitude of false positives that require time-consuming manual review. Another important point Julien made is that, although many vulns are found at MS with static analysis, fully automated analysis (without humans aiding the tools by means of annotations, for example) is not really feasible: his advice is think cyborg, not robot!
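The state merging point deserves a concrete illustration. Here's a toy interval analysis in Python (my own sketch, nothing to do with HAVOC's actual implementation) showing the precision loss at a control-flow merge point:

```python
# After `x = 1 if cond else 100`, the two branch states are x in [1,1]
# and x in [100,100]. A merging analysis joins them into x in [1,100],
# which includes values (2..99) the program can never actually produce --
# a classic source of false positives.

def merge(interval_a, interval_b):
    """Join two intervals where control flow reconverges."""
    (lo_a, hi_a), (lo_b, hi_b) = interval_a, interval_b
    return (min(lo_a, lo_b), max(hi_a, hi_b))

then_state = (1, 1)       # x on the 'then' branch
else_state = (100, 100)   # x on the 'else' branch

merged = merge(then_state, else_state)
assert merged == (1, 100)

# Keeping the branch states separate (no merging) can rule out x == 50;
# the merged state cannot, so a checker flagging x == 50 false-alarms.
separate_states = [then_state, else_state]
assert not any(lo <= 50 <= hi for lo, hi in separate_states)
assert merged[0] <= 50 <= merged[1]
```

The cost of not merging is tracking many more states, which is presumably where the 48 cores and 100 GB of RAM come in.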

Undermining Security Barriers – further adventures with USB – Andy Davis (NGS Secure)

At this point, the conference was up against a hard deadline and things were running a little behind schedule, through no fault of the organizers. That said, Andy did an outstanding job of condensing his presentation without rushing through it, and he still shared his interesting research into USB security. He started with a brief overview of USB and the vulnerability classes commonly found in USB stacks before diving into the details of his attempt to fuzz USB by emulating it with an Arduino. For performance reasons, this didn't work very well, so he moved on to controlling some commercial USB test hardware (which runs about $1200) via some Python scripts. Using this approach, he found a crash in one of the Windows 7 USB drivers, as well as vulnerabilities in the Xbox 360, OS X and Solaris USB stacks! One of the standout quotes from this presentation was "Some vendors don't consider USB vulnerabilities a security issue." (The old line about attacks being 'entirely theoretical' never gets old, does it?) Andy then released a USB device fuzzer live at the conference – it uses wxPython/libusb/pyUSB and has already found some bugs (memory corruptions in iOS 5 on the iPhone 4, for example).

You can check out the tool and research at http://ngssecure.com/research/infiltrate

Don’t Hassle The Hoff – Breaking iOS Code Signing – Charlie Miller (Accuvant)

Charlie first gave a brief overview of how code signing is enforced on the iPhone – the crux of the matter is that pages can't be both writable and executable. This allows Apple to review apps and find malware, since whatever app they review can't be self-modifying. In essence, as Charlie put it, "Apple acts as your AV". He went over the actual implementation of the page protection checks and signing enforcement in great detail, using public code where available or approximate code he reverse engineered where needed. It turns out there's one exception to the ban on W+X pages: pages allocated for the purpose of JIT compiling JavaScript. However, this special 'allocate for JIT' flag may only be used when an application's signed plist file contains a special entitlement. The entitlement is currently only granted to Mobile Safari; other apps that try to claim it obviously won't be approved by Apple. It also only allows the process to create a single W+X page, once. As an aside, he covered how most current jailbreaks work: patching out the checks that ensure a process has no W+X pages, disabling the checking of binary hashes, and making the signed binary check always return true. Moving on, Charlie described how he spotted an implementation flaw in the checks around mapping a W+X memory segment and proceeded to try to exploit it. In the end, he submitted two apps to the Apple App Store that used this technique. The first one was rejected for not being 'useful enough' – it was a Daily Hasselhoff app, hence the talk title. The second one was approved and sat in the App Store for a few months. While Charlie took careful precautions to ensure his app would only load and execute dynamic code during very specific periods of time (when he was demoing it, for example), he was still banned from Apple's developer program 'for at least a year'.
This was a really awesome presentation, full of great technical details – it was obvious that a lot of effort went into this research. Also entertaining was how many times Charlie mentioned he was no longer an official Apple Developer. A very fitting end to the conference.
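The entitlement logic Charlie described can be sketched as a toy model (plain Python; the flag, entitlement name, and API shape are stand-ins of mine, not real iOS identifiers):

```python
# Toy model of iOS-style W^X enforcement with a one-shot JIT exception,
# as described in the talk: W+X mappings require a special allocation
# flag, a signed entitlement, and may only succeed once per process.

PROT_WRITE, PROT_EXEC = 0x2, 0x4

class Process:
    def __init__(self, entitlements):
        self.entitlements = entitlements
        self.jit_page_used = False

    def mmap(self, prot, jit_flag=False):
        if prot & PROT_WRITE and prot & PROT_EXEC:
            # W+X is only reachable via the special JIT allocation flag...
            if not jit_flag:
                raise PermissionError("W+X mapping denied")
            # ...which in turn requires the signed entitlement...
            if "dynamic-codesigning" not in self.entitlements:
                raise PermissionError("missing JIT entitlement")
            # ...and may be used exactly once per process.
            if self.jit_page_used:
                raise PermissionError("JIT page already allocated")
            self.jit_page_used = True
        return "page"

safari = Process(entitlements={"dynamic-codesigning"})
assert safari.mmap(PROT_WRITE | PROT_EXEC, jit_flag=True) == "page"

store_app = Process(entitlements=set())
try:
    store_app.mmap(PROT_WRITE | PROT_EXEC, jit_flag=True)
    raise AssertionError("unreachable: unentitled W+X should be denied")
except PermissionError:
    pass
```

Charlie's exploit was, per the talk, a flaw in checks of roughly this shape – the point of the sketch is just how narrow the intended exception is supposed to be.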

In closing, I'd like to give a shout out to the INFILTRATE team for the great work they did putting the conference together, hosting it, and handling with humour the inevitable (small) hiccups that come with running an event (and also for providing an amazing dinner for the conference attendees on Thursday night). Although my work at Mozilla mostly focuses on defense in the browser, I find it invaluable to see what sorts of attacks people are researching and the techniques they use to investigate them. I also often find that research from one area of security can be applied to another area relevant to one's own work, fueling understanding of potential security problems that need to be investigated and discussed. Overall, this conference was exactly the blend that makes a good conference for me: a single track of sessions, a smaller group of extremely interested attendees, and highly technical presentations.

Jan 12

Hello world!

Hello, this is a new blog I am starting to discuss my work for the Mozilla project, security, and the intersection of the two.

You can also follow me on Twitter.