12 Jun 11

Regarding your Baby

Having been at Mozilla for some time now, I’m still fascinated by the varying perceptions people have of security reviews. To some developers it feels like the Spanish Inquisition (minus the comfy pillows), while to others it’s an opportunity to uncover potential issues they may have missed during the design or implementation process. The interesting thing is that our approach is pretty much the same for every review.

But it has become evident that we need to do a better job of defining and communicating both the structure and value of the process in a way that helps each developer maximize the value of their reviews. Among those changes, we are going to try a more structured approach to how we run the meetings.

For a typical 60-minute design review, we will break up the time as follows:

Introduction by the feature team (5-10 minutes)

  1. Goal of the feature. What outcome is it trying to achieve: problems solved, use cases enabled, etc.
  2. What solutions/approaches were considered
  3. Why was this solution chosen
  4. Any security threats or issues that were considered in the design

The purpose here is to help the security team understand the fundamental motivations and purpose of the feature, thereby setting the correct context for the rest of the conversation. If we don’t know why a feature is proposed, it becomes hard to justify (any) risk. That said, this is not the time to critique the feature. Comments and questions should be saved for later unless they directly relate to understanding the feature better.

Threat brainstorm (30-40 minutes)
A truly open brainstorming session on potential threats the feature could face or introduce. Like any good brainstorming session, it will involve some really interesting ideas as well as some really silly ones. The goal is not to make judgments during this phase; it is to test. It’s vital for the feature team to participate in a critical analysis (in the classical sense) of their own feature.

Rather than feeling like they need to defend their feature, the feature team should strive to “swap sides” and think like an attacker. The value is twofold: it helps teams understand how they can go through this questioning process themselves, and, realistically, there are very few features we know enough about to go through this process without the feature team being represented. The ability to objectively critique one’s baby is a difficult but very valuable skill.

Conclusions and action items (10-20 minutes)
Summarize the threats uncovered, then recommend and prioritize mitigations for each. Identify the parts of the feature that need follow-up security work, which may include detailed threat modeling, source code review, targeted fuzzing, etc.

Formal security reviews are not our only tool, of course. In addition to our ongoing fuzzing and bug bounty programs–and the code review every patch goes through anyway–we also use other approaches, such as embedding security team members into the most complex projects. However, that approach clearly does not scale to nearly the number of projects we have. The best approach is often a series of small conversations. Lightweight interactions can be a great way to bounce ideas and get feedback, helping a team to crystallize specific aspects of a feature and prepare them for a detailed review in the future.

But the most important characteristic of any type of security interaction is that it happens as early as possible. The earlier we talk, the more options we have to address any significant shortcomings while still shipping what you want, when you want to. The later we talk, the smaller our toolbox becomes, eventually coming down to a single boolean lever: do we ship or not?


11 Jun 11

Choosing Security

Some of the most common reasons I hear from people for coming to Mozilla are “I want to have an impact”, “I want to work on things that matter” and “I want my work to touch lots of people”. Many of us have worked on projects independently and struggled to get anyone to notice, much less care.

Mozilla is a huge platform, a megaphone, for getting noticed. That reach comes from the trust in the Mozilla brand itself, as well as from direct access to hundreds of millions of users through our established products. As such, it’s an incredibly appealing avenue for having the impact we desire.

However, with that great power comes great responsibility. Utilizing Mozilla’s brand and reach comes with the duty of ensuring that we are not putting our users at risk and undermining trust in our brand and existing products. That duty is reflected in additional scrutiny and reviews that you would not be subject to if you were doing something completely independently.

Our goal during each project’s lifecycle is to help each team have the impact they want. The best way to do so is to engage us early and often in your project, and to listen to our feedback. We can help you understand the concerns and challenges you could face not just from Mozilla as an organization, but also from our users, web developers, website admins and the security community. Engaging us proactively maximizes your chances of shipping what you want, when you want to.

Conversely, ignoring recommendations, trying to delay or barrel through the review process, or simply bypassing it entirely by releasing stuff through novel channels will likely end in an outcome very different from what you desire.

Please choose wisely!


07 Jun 11

The Uber-Fuzzer

A few weeks ago I had the chance to speak on a panel at the Hack in the Box conference in Amsterdam. For those of you not familiar with the Hack in the Box organization, it’s a great bunch of people who volunteer their time to put together a solid conference. The panel I was on discussed the “Economics of Vulnerabilities” and focused primarily on the various ways organizations can recognize and compensate independent security researchers. It was a very interesting discussion, and I thank Katie Moussouris from Microsoft, Steve Adegbite from Adobe, Adrian Stone from RIM, Aaron Portnoy from ZDI and Chris Evans from Google for representing.

Since Mozilla has had a bounty program since 2004 (and Netscape started its bounty back in 1995), we obviously have some rather strong opinions about what works. :) It’s been great seeing other software companies adopt various types of security bounty programs: Google (with great enthusiasm), Deutsche Post, Barracuda and others. The economics in our case are pretty straightforward: a researcher who submits to the Mozilla security bug bounty program gets a $3000 reward for every qualifying client bug they find, or between $500 and $3000 for each qualifying web bug. We are not buying a researcher’s silence, however; we are offering a reward for constructive security research. There are no contracts or confidentiality clauses to sign. Of course, prompt payment and public attribution are also very important. :)

No discussion of vulnerability economics can ignore the grey elephant in the room: underground markets. Whether the color of those markets is black, grey or taupe, the fundamental objective of those buyers is to buy vulnerabilities to use as tools… implements… ok, weapons to achieve specific tactical or strategic objectives, rather than to fix those issues and protect all users. An interesting tidbit that came out during the discussion is that now researchers on those markets are being paid on an ongoing basis for as long as the vulnerability remains non-public and unfixed. This clearly is intended to minimize the odds the vendor will be able to discover and fix the issue. Something to keep in mind if you choose to go down that route.

The other thing to keep in mind with the underground markets is that they are paying for a fully reliable, weaponized exploit. In most cases this is an order of magnitude more work than simply finding a bug, and frankly something that very few researchers can actually achieve (per Aaron Portnoy of ZDI). At Mozilla we don’t need–or even want, honestly–a working exploit. We just need sufficient detail to understand and locate the bug. In most cases a simple test case demonstrating memory corruption or an assertion, or just referencing the offending lines of code is enough. Meaning a whole lot less hassle for the researcher.

This all has some rather profound implications for vendors. No longer can one expect that a zero-day will be monetized through rootkits that get sprayed across the internet, quickly alerting the vendor to the issue and allowing for a fix. Every incentive seems aligned to keep these bugs off the radar for as long as possible, meaning a quick response is no longer a sufficient primary strategy. Vendors must pursue a wide variety of means to find and fix all of those issues proactively, and not sit on bugs under the false hope that nobody else will find them. A bug bounty is a critical part of that strategy for Mozilla. It works for the same reason fuzzing works: it maximizes the potential set of inputs into the problem, greatly improving the chances of finding security bugs through unique and innovative means. Security bounty programs are the “uber-fuzzer”.