Revoking Trust in one ANSSI Certificate

kwilson


Last week, Mozilla was notified that an intermediate certificate, which chains up to a root included in Mozilla’s root store, was loaded into a man-in-the-middle (MITM) traffic management device. It was then used, during the process of inspecting traffic, to generate certificates for domains the device owner does not legitimately own or control. While this is not a Firefox-specific issue, to protect our users we are updating the certificate store of Firefox in order to distrust these certificates. The Certificate Authority (CA) has told us that this action was not permitted by their policies and practices, and they have revoked the intermediate certificate that signed the certificate for the traffic management device.

Issue

ANSSI (Agence nationale de la sécurité des systèmes d’information) is the French Network and Information Security Agency, a part of the French Government. ANSSI (formerly known as DCSSI) operates the “IGC/A” root certificate that is included in NSS, and issues certificates for French Government websites that are used by the general public. The root certificate has an Issuer field with “O = PM/SGDN”, “OU = DCSSI”, and “CN = IGC/A”.

A subordinate CA of ANSSI issued an intermediate certificate that they installed on a network monitoring device, which enabled the device to act as a MITM of domains or websites that the certificate holder did not own or control. Mozilla’s CA Certificate Policy prohibits certificates from being used in this manner when they chain up to a root certificate in Mozilla’s CA program.

Impact

An intermediate certificate that is used for MITM allows the holder of the certificate to decrypt and monitor communication within their network between the user and any website without browser warnings being triggered. An attacker armed with a fraudulent SSL certificate and an ability to control their victim’s network could impersonate websites in a way that would be undetectable to most users. Such certificates could deceive users into trusting websites appearing to originate from the domain owners, but actually containing malicious content or software.

We believe that this MITM instance was limited to the subordinate CA’s internal network.

Status

Mozilla is actively revoking trust in the subordinate CA certificate that was misused to generate the certificate used by the network appliance. This change will be released to all supported versions of Firefox in this week’s updates.

Additional action regarding this CA will be discussed in the mozilla.dev.security.policy forum.

End-user Action

We recommend that all users upgrade to the latest version of Firefox. Firefox 26 and Firefox 24 ESR both contain the fix for this issue, and will be released this week.

Credit

Thanks to Google for reporting this issue to us.

Kathleen Wilson
Module Owner of Mozilla’s CA Certificates Module

Navigating the TLS landscape

Julien Vehent


A few weeks ago, we enabled Perfect Forward Secrecy on https://www.mozilla.org [1]. Simultaneously, we published our guidelines for configuring TLS on the server side. In this blog post, we want to discuss some of the SSL/TLS work that the Operations Security (OpSec) team has been busy with.

For operational teams, configuring SSL/TLS on servers is becoming increasingly complex. BEAST, LUCKY13, CRIME, BREACH and RC4 are examples of a fast-moving security landscape that has made recommendations from only a few months ago already obsolete.

Mozilla’s infrastructure is growing fast. We are adding new services for Firefox and Firefox OS, in addition to an ever increasing number of smaller projects and experiments. The teams tasked with deploying and maintaining these services need help sorting through known TLS issues and academic research. So, for the past few months, OpSec has been doing a review of the state of the art of TLS. This work is parallel and complementary to the Security Engineering team’s work on cipher preferences in Firefox; the end goal is to support, at the infrastructure level, the security features championed by Firefox.

We published our guidelines at https://wiki.mozilla.org/Security/Server_Side_TLS. The document is a quick reference and a training guide for engineers. There is a strong demand for a standard ciphersuite that can be copied directly into configuration files. But we also wanted to publish the building blocks of this ciphersuite, and explain why a given cipher is preferred to another. These building blocks are the core of the ciphersuite discussion, and will be used as references when new attacks are discovered.

Another important aspect of the guidelines is the need to be broad: we want people to be able to reach https://mozilla.org and access Mozilla’s services from anywhere. For this reason, SSLv3 is still part of the recommended configuration, but ciphers that are deprecated and no longer needed for backward compatibility are disabled. DSA ciphers are included in the list as well: almost no one uses DSA certificates right now, but some might in the future.

At the core of our effort is a strong push toward Perfect Forward Secrecy (PFS) and OCSP stapling.

PFS improves secrecy in the long run, and will become the de facto standard in all browsers. But it comes with new challenges: the handshake takes longer, due to the key exchange, and a new parameter (dhparam/ecparam) is needed. Ideally, the extra parameter should provide the same level of security as the RSA key does. But we found that old client libraries, such as Java 6, are not compatible with larger parameter sizes. This is a problem we cannot solve server-side, because the client has no way to tell the server which parameter sizes it supports. As a result, the server starts the PFS handshake, and the client fails in the middle of it. Without a way for the handshake to fall back and continue, we have to use smaller parameter sizes until old libraries can be deprecated.
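
To make the trade-off concrete, here is a minimal sketch, assuming Python’s ssl module and placeholder file names, of a server context that prefers PFS ciphers but keeps the smaller DH parameters old clients can handle. The cipher list is abbreviated; the full recommended list lives on the Server Side TLS wiki page.

import ssl

# Minimal sketch: a server-side TLS context that prefers PFS ciphers.
# File names are placeholders for your own certificate and parameters.
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)  # negotiate best common version
ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
# 1024-bit DH parameters keep old clients such as Java 6 working;
# larger parameters make their handshake fail part-way through.
ctx.load_dh_params("dhparam-1024.pem")
# Prefer ECDHE/DHE key exchange, with non-PFS fallbacks last.
ctx.set_ciphers("ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:"
                "HIGH:!aNULL:!MD5")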

OCSP stapling is a big performance improvement. OCSP requests to third-party responders block the TLS handshake, directly impacting the user’s perception of page opening time. Recent web servers can now cache the OCSP response and serve it directly, saving the client the round trip to the OCSP responder. OCSP stapling is likely to become an important feature of browsers in the near future, because it improves performance and reduces the cost of running worldwide OCSP responders for Certificate Authorities.
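
As a quick way to see whether a server staples OCSP responses, something like this hedged sketch works; it shells out to the openssl command-line tool, whose -status flag requests a stapled response:

import subprocess

# Hedged sketch: ask a server for a stapled OCSP response and grep the
# openssl output for a successful OCSP Response Status line.
def has_ocsp_staple(host):
    cmd = ["openssl", "s_client", "-connect", host + ":443",
           "-servername", host, "-status"]
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    out, _ = proc.communicate(b"")
    return b"OCSP Response Status: successful" in out

print(has_ocsp_staple("www.mozilla.org"))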

OpSec will maintain this document by keeping it up to date with changes in the TLS landscape. We are using it to drive changes in Mozilla’s infrastructure. This is not a trivial task, as TLS is only one piece of the complex puzzle of providing web connectivity to large websites. We found that very few products provide the full set of features we need, and most operating systems don’t provide the latest TLS versions and ciphers. This is a step forward, but it will take some time until we provide first class TLS across the board.

Feel free to use, share and discuss these guidelines. We welcome feedback from the security and cryptography communities. Comments can be posted on the discussion section of the wiki page, submitted to the dev-tech-crypto mailing list, posted on Bugzilla, or in #security on IRC. This is a public resource, meant to improve the usage of HTTPS on the Internet.

[1] bug 914065

Learning From a Recent Security Vulnerability in Persona

Lloyd Hilaiel

The purpose of our “Bug Bounty Program” is to encourage contributors to test and experiment with our code for the purposes of improving its functionality, security and robustness. Through this program we were recently alerted to a potential security flaw in one of our web services products, Persona.

In short, the issue reported could have allowed an attacker to impersonate any gmail.com or yahoo.com user on a website that supports Persona.  We have no evidence that it was exploited, and the issue has been resolved in production.  You can read a summary of the timeline in our recent disclosure.

Background: Persona and Identity Bridging


To understand the vulnerability, a little background on Persona is required:  Persona is a federated protocol which allows a user to verify their ownership of an email address by directly authenticating with their email provider.

If a user’s email provider does not support Persona, Mozilla can temporarily act as a trusted third party and vouch for the user. Mozilla will only vouch for a user if they can prove ownership of their email address, often by clicking a confirmation link mailed to that address. To streamline this process, Mozilla recently introduced a feature known as “Identity Bridging.” Bridging allows Mozilla to use existing public APIs, like OAuth or OpenID, to verify a user’s identity directly in the browser, without needing to send a confirmation email.

You can learn more about Identity Bridging in the original announcement. Generally, bridges are designed as a temporary measure. They allow Mozilla to deliver a streamlined login experience in advance of email providers supporting Persona themselves. Currently, Mozilla operates two bridges, one for Google and one for Yahoo users, both based on OpenID.

When logging into a website, Persona prompts the user to type an email address. Persona then performs discovery against the email’s domain to determine if the given email provider supports Persona.

If so, the user is sent directly to the provider to authenticate. If not, the user is sent to Mozilla’s fallback.

The fallback maintains a list of email domains and associated identity bridges. If a bridge is available, the user is sent to that bridge to authenticate. If not, Mozilla sends the user a confirmation email to verify ownership of the address.

Authenticating at a bridge occurs as a normal OpenID transaction:

  1. The bridge redirects users to the OpenID endpoint of the respective provider, requesting confirmation of ownership of an email.
  2. Subsequent to user interaction, the user is redirected back to the bridge with information supplied by the email provider in GET parameters.
  3. Online server-to-server verification of the authenticity of the request parameters is performed.
  4. Once verified, the user’s email address is considered confirmed, and the bridge issues a certificate which allows the user to assert ownership in a separate offline operation.


What Went Wrong?


The flaw that was discovered was in step #4 above.  Using OpenID to verify a user’s ownership of an email address is a tricky process.  OpenID Attribute Exchange is the standard that allows an email provider (an identity provider, or IdP) to encode arbitrary attributes (such as email addresses) in the GET parameters of the URL of the website using OpenID (the relying party, or RP).  OpenID authentication further allows the IdP to provide a signature that covers a subset of the returned values.

The vulnerability could have allowed a malicious user to trick Persona into trusting unsigned parameters. To demonstrate the nature of related exploits, let’s explore a valid response and associated, simplified attacks:


A Valid OpenID Provider Response


To begin, in order to successfully confirm ownership of an email address via OpenID, one must verify that the OpenID endpoint is authoritative for the email in question.  For gmail.com addresses, Google is authoritative – for yahoo.com addresses, Yahoo is authoritative.

Once you’ve identified the correct endpoint for the email in question, the simplified example below (many required fields are removed) demonstrates a valid verification response:

  openid.op_endpoint: https://www.google.com/accounts/o8/ud
  openid.signed: op_endpoint,ns.ext1,ext1.mode,ext1.type.email,ext1.value.email
  openid.sig: <base 64 encoded signature>
  openid.ns.ext1: http://openid.net/srv/ax/1.0
  openid.ext1.mode: fetch_response
  openid.ext1.type.email: http://axschema.org/contact/email
  openid.ext1.value.email: example@gmail.com

In this response, the openid.sig parameter contains a signature that covers the fields enumerated in the openid.signed parameter.  Included in the signature are the openid.ext1 fields, which contain the email address of the user at Gmail.  Popular OpenID providers provide a service that allows online validation of signatures.  However, verification of the signature is not enough, and many popular OpenID libraries may appear deceptively simple.

To explore the nature of data validation required, let’s walk through a couple possible attacks.


Attack #1: Unsigned Email Attribute


Consider the following hypothetical return parameters from the IdP:

  openid.op_endpoint: https://www.google.com/accounts/o8/ud
  openid.signed: op_endpoint
  openid.sig: <base 64 encoded signature>
  openid.ns.ext1: http://openid.net/srv/ax/1.0
  openid.ext1.mode: fetch_response
  openid.ext1.type.email: http://axschema.org/contact/email
  openid.ext1.value.email: example@gmail.com

This set of response data has a valid signature, but the signature does not cover the email address returned by the IdP. Many popular OpenID libraries will return a response to calling code that is identical, regardless of whether return attributes are signed.  This would allow an attacker to generate a valid OpenID response which does not include an email address, and then append their own claimed email address.  This simple attack combined with behavior common in OpenID libraries can lead to vulnerabilities.


Attack #2: Data Extraction Flaws


Depending on the RP’s OpenID implementation, it’s possible that an attacker can spoof an email address despite the presence of expected values that are valid and signed.  Consider the following:

  openid.ns.foo: http://openid.net/srv/ax/1.0
  openid.foo.mode: fetch_response
  openid.foo.type.email: http://axschema.org/contact/email
  openid.foo.value.email: victim@gmail.com
  openid.op_endpoint: https://www.google.com/accounts/o8/ud
  openid.signed: op_endpoint,ns.ext1,ext1.mode,ext1.type.email,ext1.value.email
  openid.sig: <base 64 encoded signature>
  openid.ns.ext1: http://openid.net/srv/ax/1.0
  openid.ext1.mode: fetch_response
  openid.ext1.type.email: http://axschema.org/contact/email
  openid.ext1.value.email: example@gmail.com

In this case, there is a properly signed email present in the response; however, an unsigned email also precedes it.  Online verification will succeed, and depending on how value extraction is implemented, this might be enough to cause the incorrect (unsigned) email to be extracted and returned to calling code, resulting in unauthorized access to victim@gmail.com's account.  In a popular OpenID implementation we found a lack of rigor in value extraction that could lead to vulnerabilities of this nature.


How Can Others Avoid These Pitfalls?


The simplified attacks in this post give you a flavor of the attention to detail required when using OpenID to confirm email ownership.  The same cautions apply to any sensitive information returned from OpenID that you rely on.

Our suggestions are twofold:

For library authors: Go the extra mile.  If you perform extraction of values returned by the IdP, perform rigorous validation, and expose the trust level of the return value.  For instance, in JavaScript, rather than returning an email property, scope that property under an .unsigned or .untrusted namespace.  Make it blindingly obvious which values are, and are not, trustworthy.

For site owners using OpenID:  Give your implementation a deep look.  Ensure that identifying values are signed, types are as you expect, and parsing is rigorous.  For the IdPs that you use, research their suggested best practices.
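
As an illustration of both points, here is a minimal sketch (not Persona’s actual code, and simplified beyond what production validation requires) of extraction logic that only trusts attributes covered by the signature. It assumes `params` is the dict of GET parameters and that openid.sig has already been verified online against the IdP:

AX_NS = "http://openid.net/srv/ax/1.0"
EMAIL_TYPE = "http://axschema.org/contact/email"

def extract_signed_email(params):
    signed = set(params.get("openid.signed", "").split(","))

    # Find the alias bound to the AX namespace, and insist the binding
    # itself is signed -- this defeats attack #2 above.
    alias = None
    for key, value in params.items():
        if key.startswith("openid.ns.") and value == AX_NS:
            candidate = key[len("openid.ns."):]
            if "ns." + candidate in signed:
                alias = candidate
                break
    if alias is None:
        return None

    # Every field we rely on must appear in openid.signed -- this
    # defeats attack #1 above.
    needed = [alias + ".mode", alias + ".type.email", alias + ".value.email"]
    if any(field not in signed for field in needed):
        return None

    if params.get("openid." + alias + ".type.email") != EMAIL_TYPE:
        return None
    return params.get("openid." + alias + ".value.email")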


Parting Thoughts


This was a serious vulnerability in Persona; however, our evidence indicates it was addressed before users were affected.  Moving forward, we will be conducting a review of the bridging code and the third-party libraries it uses, and our Bug Bounty program will continue to be a vehicle for researchers to disclose vulnerabilities in our critical services. Finally, we will continue to address discovered vulnerabilities with all due effort, and openly and responsibly disclose them.

Bug Bounty Program Finds and Helps Resolve Security Vulnerability in Persona

mcoates

The purpose of our “Bug Bounty Program” is to encourage contributors to test and experiment with our code for the purposes of improving its functionality, security and robustness. Through this program we were recently alerted to a potential security flaw in one of our web services products.

Issue

On Tuesday, September 24th, Mozilla was notified by a security researcher of a vulnerability within the Persona service that could potentially have allowed an attacker to authenticate to a Persona-enabled website using the identity of an existing Gmail or Yahoo account.

As of Tuesday, October 1st, we’ve deployed updates to Persona to fully address this security concern. We also reviewed available log data from September 10th through October 2nd and confirmed that this flaw has not been used to target any users.

Impact

The vulnerability could have allowed a malicious attacker to authenticate to a Persona-enabled website using the identity of an existing Gmail or Yahoo account.

Note: This issue only impacted the Persona service and sites that implement Persona. This vulnerability has no bearing on the security of a user’s Gmail or Yahoo email service.

Status

Mozilla immediately investigated and tested patches to address this issue. Initial patches to Persona were deployed on Friday, September 27th and additional patches for an identified edge case were deployed on Tuesday, October 1st.

The vulnerability that led to this issue was created by incorrect assumptions about the behavior and security of two third-party libraries. We’ve captured these details more fully in a technical post on the issue authored by Lloyd Hilaiel.

Credit for discovery of this issue goes to
Daniel Fett, Ralf Kuesters, and Guido Schmitz,
researchers at the Chair of Information Security and Cryptography,
University of Trier, Germany.

HITBSecConf HackWeekDay 2013

Paul Theriault


Mozilla is proud to be once again sponsoring HackWeekDay at the Hack-in-the-Box security conference in Malaysia in October. The event is a chance for developers – both students and professionals – to come together and prototype new apps and features for Firefox OS – security-related or otherwise. We want to build on the already strong community in the region and encourage the open-source community to support the future of the mobile web.

For details of the prizes on offer and how to get involved, see the HackWeekDay page on the HITB website.

What are we hoping to achieve?

We want you to help push the Firefox OS platform forward. This competition is being sponsored by the security team at Mozilla, but you can hack on any app that is interesting to you. And not just apps – we would love to see people developing new features for Firefox OS itself. A group of Mozillians will be there to help developers test their entries on Firefox OS devices and award prizes (including Firefox OS phones!).

How can people get prepared?

Interested developers who want to get started should get familiar with how to develop Apps for Firefox OS. Download the simulator, and make a basic app – this tutorial provides the basics of building apps for Firefox OS.

Got the basics down? Have a read of this page for tips on how to make fast mobile apps – this is critical for mobile, and even more so on Firefox OS, which targets more affordable and therefore lower-powered handsets.

Feeling strong with Apps, or just looking for inspiration? Dive into the Firefox OS front-end code. Everything the user sees in Firefox OS is actually written in the same web technologies used for Apps: HTML, JavaScript and CSS. Exploring the Firefox OS front-end, code-named "Gaia", is a great way to get a deeper understanding of the platform.

If you want to develop on Firefox OS itself, see the Hacking on Gaia page on MDN.

What are we looking for in the entries?

  • Prototypes which effectively demonstrate a new or interesting feature
  • Innovative use of new Web APIs
  • High quality execution (especially on the constraints of mobile)
  • Features that increase the security and privacy of Firefox OS users

What about others who can’t attend the conference?

  • Make apps for the Mozilla marketplace
  • Volunteer to help Mozilla security team on Firefox OS (or anything else) (come find us on irc://irc.mozilla.org/security or email security@mozilla.org)

Will the community be there?

Yes! Mozilla Malaysia is planning a presence at the event.

Introducing html2dom, an alternative to setting innerHTML

Frederik Braun


Having spent significant time reviewing the source code of some Firefox OS core apps, I noticed that a lot of developers like to use innerHTML (or insertAdjacentHTML). It is indeed a useful API to insert HTML from a given string without hand-crafting objects for each and every node you want to insert into the DOM.
The dilemma begins, however, when the string is not hardcoded but constructed dynamically. If the string contains user input (or something from a malicious third party – be it app or website), it may well insert and change application logic (Cross-Site Scripting): the typical example would be a <script> tag that runs code on the attacker’s behalf and reads, modifies or forwards the current content to a third party. CSP, which we use in Firefox OS, can only mitigate some of these attacks, but certainly not all.

Using innerHTML is bad (Hint: DOM XSS)

What’s also frustrating about these pieces of code is that analyzing them requires you to manually trace every function call and variable back to its definition to see whether it is indeed tainted by user input.

With code changing frequently, those reviews don’t really scale. One possible approach is to avoid using innerHTML for good. Even though this idea sounds a bit naive, I dived into the world of automated HTML parsing and code generation to see how feasible it is.

Enter html2dom

For the sake of experimentation (and solving this neatly self-contained problem), I have created html2dom. html2dom is a tiny library that accepts an HTML string and returns alternative JavaScript source code. Example:

<p id="greeting">Hello <b>World</b></p>

This will yield the following (as a string):

var docFragment = document.createDocumentFragment();
// this fragment contains all DOM nodes
var greeting = document.createElement('P');
greeting.setAttribute("id", "greeting");
docFragment.appendChild(greeting);
var text = document.createTextNode("Hello ");
greeting.appendChild(text);
var b = document.createElement('B');
greeting.appendChild(b);
var text_0 = document.createTextNode("World");
b.appendChild(text_0);

As you can see, html2dom tries to use meaningful variable names to make the code readable. If you want, you can try the demo here. Now we could also just replace the "World" string with a JavaScript variable. It cannot do any harm as it is always rendered as text.

When it comes to HTML parsers, you also don’t want to write your own.

Luckily, there are numerous very useful APIs which made the development of html2dom fairly easy. First, there is the DOMParser API, which took care of all the HTML parsing. Using the DOM tree output, I could just iterate over all nodes and their children to emit a specific piece of JavaScript depending on each node’s type (e.g., HTML or Text). For this, the nodeIterator turned out to be really valuable.

I have also written a few unit tests, so if you want to start messing with my code, I suggest you start by checking them out right away.

Known Bugs & Security

This tool doesn’t really save you from all of your troubles. But if you can make sure that user input always ends up in a text node, then html2dom can protect you from a great deal of harm. Give it a try!

On the horizon

I have also been looking at attempts to rewrite potentially dangerous JavaScript automatically. This is at an early stage and still experimental, but you can look at a prototype here.

A New Focus on Security in the Web Console

Garrett Robinson

Web developers need better tools to help them debug security issues. The Web Console, part of the Firefox Developer Tools, shows errors and warnings filtered into different categories. Firefox 23 adds a new category of messages to the Web Console: Security messages.

Toggle buttons for categories of messages in the Web Console

The Security toggle button and messages are red to warn developers, since some of these messages indicate that your site has a security vulnerability.

Once we had a dedicated place for security messages, we had to decide what kinds of issues should be reported to developers. Ivan Alagenchev, a security engineering intern, spent the summer improving security reporting to fulfill the following goals:

  1. Warn developers about altered site behavior that is due to a security feature (for example, resource loads blocked by the Mixed Content Blocker or the Same Origin Policy).
  2. Warn developers about mistakes made in implementing security features (for example, using deprecated CSP headers, or mistyping an HSTS header).
  3. Warn developers about common security risks (for example, putting password fields on insecure pages).

Here are example screenshots of some of the new Security messages:

Errors for blocked mixed content in the Web Console.

Warnings for loading mixed content

Warning for detected password field on an insecure page.

These specific messages are available to current Nightly users and will be part of upcoming stable releases.

While security should be of paramount importance to any developer, it is a complex subject that is not always part of a web developer’s education and often appears at inconvenient times. This new messaging helps developers find security-related problems early on in the development life cycle so they can be resolved quickly and effectively.

Additionally, these messages help educate developers about common issues in web security. Many of the new messages end with a “Learn More” link that takes you to a wiki with background information and advice for mitigating the security issue.

Bug 863874 is the meta-bug for logging relevant security messages to the Web Console. If you have more ideas for useful features like the ones discussed here, or are interested in contributing, check out the meta-bug and its dependencies!

Writing Minion Plugins

yboily

The following blog post is contributed by Yeuk Hon, an intern who has been with the Security Automation team at Mozilla over the summer. Today is his last day with Mozilla, and this post serves as a tutorial on how to write Minion Plugins. As an aside, I would also like to thank Yeuk Hon for his awesome work over the summer! – Yvan Boily

Hello, the Web! I am Yeuk Hon, a summer intern working with the Security Assurance team. In this blog post, I will go over how a Minion plugin works and how to write a Minion plugin that works for your tool. Before you dive into my blog post, I encourage you to read Yvan’s Introducing Minion if you are not familiar with Minion already.

To recap briefly, Minion was created to make a web platform where developers can kick off active vulnerability scans against their own sites. The ability to execute a Python script to invoke other tools and applications makes Minion powerful, easy to use and extend.

Minion Recap

Minion’s workflow is quite simple. A developer would come on Minion, select a plan for a site, click the scan button and then wait for the scan report to come back. A Minion plan is a JSON document containing a list of workflows. A workflow is a JSON hashtable (or dictionary in Python) that specifies which Minion plugin and what configuration parameters to use. In essence, you can run a scan using a single Minion plugin or multiple Minion plugins with multiple configurations.

Here is an example of a Minion plan:

[
   {
      "configuration": {},
      "description": "Check to see if Set-Cookie has HttpOnly and secure flag enabled",
      "plugin_name": "minion.plugins.setcookie.SetCookiePlugin"
   },
   {
      "configuration": {
         "auth": {
            "type": "basic",
            "username": "foo",
            "password": "bar"
         },
         "scan": true
      },
      "description": "Run an active scan using X scanner",
      "plugin_name": "minion.plugins.example.ExampleScanner"
   }
]

This example plan will use two plugins. The first plugin does not expect any additional configuration, while the second plugin, “ExampleScanner”, is told to do a scan and is given the basic auth login. Configuration parameters can vary from one plugin to another, but we will try to document common plugin configuration patterns. For example, minion-zap-plugin and minion-skipfish-plugin both use the same authentication configuration pattern as shown above.

Plugin execution

There are two types of plugins in general: blocking plugins and external process plugins. Blocking plugins are Python scripts that do not invoke external processes like nmap. External process plugins use external processes like nmap to do the actual scan work, with the Python script handling spawning the process and collecting and returning results to the Minion backend.

Minion’s backend uses the Twisted framework to drive events. To keep track of state, Minion uses RabbitMQ as a broker and Celery workers for queuing, state bookkeeping, and task execution. Below is a simplified diagram showing the plugin configuration, activation, and completion in the backend.


The main takeaways from the diagram are:

  1. Minion’s backend spawns a process, called minion-plugin-runner, using Python’s subprocess module.
  2. The runner will invoke the plugin’s “do_start” method to run the plugin. A blocking plugin is handled by calling Twisted’s “deferToThread”, and an external process plugin invokes an external process via Twisted’s “spawnProcess”.

If you have used Twisted before, you probably realized that method names like “do_start” are Twisted conventions. As a plugin author, knowing Twisted can be helpful, so we recommend checking out this Twisted guide.

Example 1: SetCookiePlugin

That’s a lot of information to digest so let’s look at an actual plugin. We will continue with the Set-Cookie plugin we listed in our example Minion plan. This snippet shows part of the plugin, but you can find the full source code here.

import requests
from minion.plugins.base import BlockingPlugin

class SetCookiePlugin(BlockingPlugin):
    PLUGIN_NAME = "SetCookie"
    PLUGIN_VERSION = "0.1"

    FURTHER_INFO = [ {"URL": "http://msdn.microsoft.com/en-us/library/windows/desktop/aa384321%28v=vs.85%29.aspx", "Title": "MSDN - HTTP Cookies"} ]

    def do_run(self):
        r = requests.get(self.configuration['target'])
        if 'set-cookie' not in r.headers:
            return self.report_issues([
                {'Summary': "Site has no Set-Cookie header",
                'Description': "The Set-Cookie header is sent by the server in response to an HTTP request, which is used to create a cookie on the user's system.",
                'Severity': "Info",
                "URLs": [ {"URL": None, "Extra": None} ],
                "FurtherInfo": self.FURTHER_INFO}])
        else:
            # take care of cases where
            # (1) HttpOnly flag is not set,
            # (2) secure flag is not set
            # (3) both flags ARE set

We first set the name and the version of the plugin. Since the list of references (we call them further info in the plugin) is static, we can make it a static member variable at the class level. We will talk about what the BlockingPlugin class does later, but for the meantime, all we have to do for this plugin is override the do_run method. We just check to see if “set-cookie” is in the response headers and then report our observation back to Minion in Minion’s report format. If Set-Cookie is not present, there is no risk, so the severity level is hardcoded to Info.
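
For illustration, here is a hedged sketch of how the remaining cases might be handled; the real plugin’s checks and severities live in the full source linked above, and the severities here are illustrative. This body continues the else branch of do_run:

        else:
            issues = []
            cookie = r.headers['set-cookie'].lower()
            if 'httponly' not in cookie:
                issues.append({
                    'Summary': "Set-Cookie does not use the HttpOnly flag",
                    'Description': "Without HttpOnly, scripts can read the "
                        "cookie, making session theft via XSS easier.",
                    'Severity': "High",
                    "URLs": [{"URL": None, "Extra": None}],
                    "FurtherInfo": self.FURTHER_INFO})
            if 'secure' not in cookie:
                issues.append({
                    'Summary': "Set-Cookie does not use the secure flag",
                    'Description': "Without the secure flag, the cookie can "
                        "also be sent over unencrypted HTTP.",
                    'Severity': "High",
                    "URLs": [{"URL": None, "Extra": None}],
                    "FurtherInfo": self.FURTHER_INFO})
            return self.report_issues(issues)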

The report scheme looks like this:

[
    {
        "Summary": "One sentence description of the issue (required)",
        "Description": "In-depth description of the issue and why the issue matters (required)",
        "Solution": "Mitigations (optional)",
        "Severity": "High/Medium/Low/Info/Error/Fatal (required)",
        "URLs": [
        {
            "URL": "http://target_site.com/path1",
            "Extra": "Extra information on why this particular URL is affected"
        }],
        "FurtherInfo": [
        {
            "URL": "http://reference1.com/",
            "Title": "Reference read title 1"
        }]
    }
]

The URLs are also optional and are used to indicate which parts of the site are affected by the same issue. The possible severity levels are High, Medium, Low, Info, Error and Fatal. Solution is optional if the issue doesn’t need to offer a solution; for example, when the severity level is Info you don’t need any solution.

To implement a plugin, there are a few things to do:

  • A plugin must be a class, and the top of the class chain should be "minion.plugins.base.AbstractPlugin"
  • The "do_configure", "do_start", and "do_stop" methods are implemented
  • Use self.report_issues to return a list of issues in JSON format
  • The "self.configuration" dictionary contains all the configuration parameters passed from the Minion plan, in addition to the site URL (which is available as self.configuration['target'])

Plugin classes

As we said earlier, plugins generally fall into blocking and external process types; Minion ships with "BlockingPlugin" and "ExternalProcessPlugin" out of the box for plugin authors to use:

  1. "minion.plugins.base.BlockingPlugin" is used as a parent class for plugins written in pure Python that don’t require running a subprocess.
  2. "minion.plugins.base.ExternalProcessPlugin" is used as a parent class for plugins that require launching an external process.

You can create your own plugin type by subclassing "AbstractPlugin" (or further subclassing "BlockingPlugin" and "ExternalProcessPlugin") if you have to.

Example 2: SetCookieScannerPlugin

Here is the full source code of a plugin that runs a Go program mirroring the SetCookie plugin we discussed earlier.

import json
import logging

from minion.plugins.base import ExternalProcessPlugin

# Set-Cookie checker that runs the setcookie scanner written in Go
class SetCookieScannerPlugin(ExternalProcessPlugin):
    PLUGIN_NAME = "SetCookieScanner"
    PLUGIN_VERSION = "0.1"

    def do_start(self):
        scanner_path = self.locate_program("setcookie_scanner")
        if not scanner_path:
            raise Exception("Cannot find setcookie_scanner program.")

        self.stdout = ""
        self.stderr = ""

        # spawn by calling the executable and a list of args
        self.spawn(scanner_path, [self.configuration['target']])

    def do_process_stdout(self, data):
        self.stdout += data

    def do_process_stderr(self, data):
        self.stderr += data

    def do_process_ended(self, process_status):
        if self.stopping and process_status == 9:
            self.report_finish("STOPPED")
        elif process_status == 0:
            # each line of stdout should be a JSON issue; collect the
            # lines that parse and log the rest
            stdouts = self.stdout.split('\n')
            minion_issues = []
            for stdout in stdouts:
                try:
                    minion_issues.append(json.loads(stdout))
                except ValueError:
                    logging.info(stdout)
            self.report_issues(minion_issues)
            self.report_finish()
        else:
            self.report_finish("FAILED")

We subclass "ExternalProcessPlugin" and use "self.spawn" to call the Go command-line program. If your program speaks JSON, it is easy to parse the output (and this program is written to mirror the Python code, so the Go program actually outputs the standard issue format that Minion expects). It would be awesome if other scan tools and Minion could agree on a common report scheme, because then writing plugins and exporting to other tools would become trivial.

Grow the Ecosystem!

Here is a list of plugins we have developed so far:

  • basic plugins
  • minion-zap-plugin
  • minion-skipfish-plugin
  • minion-setcookie-plugin
  • minion-nmap-plugin
  • minion-ssl-plugin
  • minion-breach-plugin

Also, a community member has been writing a minion-arachni-plugin for Arachni.

But that’s not enough. We want to make security reviews agile by allowing developers to create and use different plugins to keep their applications secure from common vulnerabilities on their own. To achieve this goal, we must build an ecosystem with a great number of useful plugins. If you haven’t heard, Minion IS open source. Fork it on https://github.com/mozilla/minion and help us grow. Minion can become big. Let us know your ideas and talk to us to get started. Personally, I feel Minion has the potential to grow into something big like OpenStack and Docker.

Anyway, you can reach us via

Although my internship is coming to an end this week, I will continue to contribute on GitHub like the rest of our Minion developers always do. I want to thank all of you awesome Mozillians and Minion users for the support and guidance over the past 12 weeks. In particular, I want to thank my Web Security Automation team:

  • Stefan Arentz for being Hacker’s Best Mentor and leading Minion development
  • Yvan Boily for providing resources for Minion development
  • Simon Bennetts for mentoring me on improving minion-zap-plugin
  • Mark Goodwin for his initial security review and support

and a big thanks to Stephen Donner (Mozilla Web QA manager) for his commitment to use Minion.

Plug-n-Hack

Simon Bennetts

Plug-n-Hack Overview

Plug-n-Hack (PnH) is a proposed standard from the Mozilla security team for defining how security tools can interact with browsers in a more useful and usable way.

Security researchers commonly use security tools in conjunction with browsers, but until now direct integration has required writing platform and browser specific extensions.

Configuring a browser to work with a security tool can be a non-trivial process, and this can discourage people with less experience from using such tools. This can include application developers and testers, exactly the sort of people we would like to see using these tools more!

For example, to configure a browser to use an intercepting proxy that can handle HTTPS traffic, the user must typically:

  • Configure their browser to proxy via the tool
  • Configure the tool to proxy via their corporate proxy
  • Import the tool’s SSL certificate into their browser

If any of these steps are carried out incorrectly then the browser will typically fail to connect to any website – debugging such problems can be frustrating and time-consuming.

Without integration between security tools and browsers, a user must often switch between the tool and their browser several times to perform a simple task, such as intercepting an HTTP(S) request.

PnH allows security tools to declare the functionality they support that is suitable for invoking directly from the browser.

A browser that supports PnH can then allow the user to invoke such functionality without having to switch to and from the tool.

While some of the PnH capabilities do have a fixed meaning, particularly around proxy configuration, most of the capabilities are completely generic, allowing tools to expose whatever functionality they want.

Implementing the above features in Firefox and the tools that we work on and support gives our team an advantage; however, we believe that opening up such capabilities to all browsers and all security tools is much more useful for security researchers, application developers, and testers.

As a result we have designed and developed the PnH protocol to be both browser and tool independent. The current protocol and Firefox implementation are released under the Mozilla Public License 2.0 which means it can be incorporated in commercial tools without charge.

Phase 1

PnH phase 1 allows easier integration and defines how security tools can advertise their capabilities to browsers.

To support PnH-1, security tools provide a manifest over HTTP(S) which defines the capabilities that the browser can make use of.

It is up to the tool authors to decide how the URL of the manifest is publicised.

An example manifest (for OWASP ZAP) is:

{
  "toolName":"OWASP ZAP",
  "protocolVersion":"0.2",
  "features":{
    "proxy":{
      "PAC":"http://localhost:8080/proxy.pac",
      "CACert":"http://localhost:8080/OTHER/core/other/rootcert/"
    },
    "commands":{
      "prefix":"zap",
      "manifest":"http://localhost:8080/OTHER/mitm/other/service/"
    }
  }
}

The top-level manifest includes optional links to a proxy PAC and a root CA certificate.

It also optionally links to another manifest which describes the commands the browser can invoke.
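
To illustrate how little a consumer needs, here is a hypothetical Python sketch that fetches a manifest like the one above and pulls out the proxy settings a browser would install. The manifest URL below is a placeholder; each tool publishes its own.

import json
from urllib.request import urlopen

# Hypothetical sketch: fetch a PnH tool manifest and extract the proxy
# configuration a browser needs. The URL is a placeholder.
raw = urlopen("http://localhost:8080/pnh/manifest").read()
manifest = json.loads(raw.decode("utf-8"))
proxy = manifest["features"]["proxy"]
print("Tool:", manifest["toolName"])
print("PAC to install:", proxy["PAC"])
print("Root CA to import:", proxy["CACert"])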

An example commands manifest (for OWASP ZAP) is: https://code.google.com/p/zap-extensions/source/browse/branches/beta/src/org/zaproxy/zap/extension/plugnhack/resource/service.json

In Firefox, the tool commands will be made available via the Developer Toolbar (GCLI): https://developer.mozilla.org/en-US/docs/Tools/GCLI

An example of how the ZAP commands are currently displayed is:

Note that commands can take user-specified parameters, which can be free text, a static pull-down list of options, or a dynamic list of options obtained from the tool on demand.

So if you select the “zap scan” command then you will be prompted to select a site from the list of sites currently known to ZAP.

PnH does not specify how tool commands should be displayed, so other browsers are free to display them in different ways.

Phase 2

The next phase of PnH is still being planned but is intended to allow browsers to advertise their capabilities to security tools.

This will allow the tools to obtain information directly from the browser, and even use the browser as an extension of the tool.

If you are interested in working on this aspect then please get in touch.

Get involved

While this project has been started by the Mozilla Security Team and has been validated with Firefox and OWASP ZAP, this is an open project and we welcome involvement from anyone, especially people working on other browsers and security tools.

If you would like to add PnH support to a browser or tool, or even get involved in onward PnH development, then please get in touch and we will give you whatever assistance we can.

Tools supporting PnH

  • OWASP ZAP 2.2.0: via MITM-conf add-on
    Source code available from: zap-extensions
  • Burp Suite: support coming soon


Introducing FuzzDB

amuntner


FuzzDB is an open source database of attack patterns, predictable resource names, regex patterns for identifying interesting server responses, and documentation resources. It’s most often used for testing the security of web applications but can be useful for many other things. FuzzDB started off as years of my own personal documentation and research notes and gradually evolved into its current form.

This is the first of a series of blog posts about FuzzDB. It discusses:

  • The problem that led to the creation of FuzzDB
  • What kinds of things are in FuzzDB
  • The different ways in which FuzzDB could be used
  • The future of FuzzDB

FuzzDB is hosted at Google Code: https://code.google.com/p/fuzzdb/

Thinking About Test Cases

A lot of attention has been paid to identifying attackable surface areas, but less to the development of attack pattern libraries. When we dynamically test web applications for security vulnerabilities, how good are the test cases we’re using?

Commercial web scanning tool vendors put significant research effort into this problem, but the product of this research is considered intellectual property and locked up inside the application. As users, in order to learn what kinds of test cases are being generated we would need to painstakingly record and analyze its traffic. At the time I initially released FuzzDB, most open source web fault injection tools had sets of test cases which were woefully incomplete and inadequate. There are too many permutations of symbols and encodings used in web protocols for anyone to reliably and repeatably recall all of them. As for the commercial tools, how complete are their sets of test cases, anyway? It’s not always easy to tell. What were they actually testing for? These tools aren’t just test case lists, they’re lists wrapped in complex sets of rules that determine which test cases to use when and where. After considering these details, I had some doubts about the effectiveness of the typical application testing process.

My thoughts turned to increasing the speed and accuracy with which I could find certain classes of vulnerabilities during assessments. I began collecting, categorizing, and using lists of attack strings and of common file and directory names. Eventually I organized them into what is now FuzzDB and made it freely available under an Open Source license, the Creative Commons Attribution license.

As with any tool, an individual with malicious intent could potentially use FuzzDB in bad ways. However, I believe that it’s better to provide this information for the security of all. More importantly, if developers and testers have access to a good set of test cases, software will be released that has already passed this list of test cases.

That’s my ultimate goal for FuzzDB: for it to become obsolete as an attack tool because the applications become more secure. When applications and frameworks are inoculated against its patterns through testing and secure coding techniques, bad actors will no longer find the patterns in FuzzDB to be useful.

What’s in FuzzDB?

Predictable Resource Locations - Because there are a small number of popular server OS and infrastructure application packaging systems, resources such as logfiles and administrative directories are typically located in a small number of predictable locations. FuzzDB contains a comprehensive database of these, categorized by OS platform, web server, and application. The intent is for a tester to use these lists to make educated rather than brute-force guesses, significantly increasing the likelihood of successful forced browsing to interesting and vulnerable resources. They’re also appropriate for use in automated scanners and IDS/IPS signatures.
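
For example, a hedged sketch of forced browsing driven by one of these lists might look like this; the wordlist path and target URL are placeholders, and you should only probe applications you are authorized to test:

import requests

# Hedged sketch: forced browsing with a FuzzDB wordlist. The file path
# and target URL are placeholders; point them at your FuzzDB checkout
# and your own test application.
base = "http://target.example/"
with open("fuzzdb/discovery/predictable-filepaths/logins.txt") as f:
    for line in f:
        path = line.strip()
        if not path or path.startswith("#"):
            continue
        r = requests.get(base + path.lstrip("/"), allow_redirects=False)
        if r.status_code != 404:
            print(r.status_code, path)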

Attack Patterns – The attack pattern test-case sets are categorized by platform, language, and attack type. These are malicious and malformed inputs known to cause information leakage and exploitation. FuzzDB contains comprehensive lists of attack payloads known to cause issues like OS command injection, directory listings, directory traversals, source exposure, file upload bypass, authentication bypass, http header crlf injections, and more.

When I say “malicious inputs,” I mean it. Downloading the project may cause antivirus alerts or trigger pattern-based malicious code sensors. While FuzzDB is itself nothing but a collection of text files that are harmless on their own, some of the patterns included in the files have been used extensively in worms, malware, and other exploits.

Response Analysis - Since system responses also contain predictable strings, FuzzDB contains a set of regex pattern dictionaries, such as interesting error messages to aid detection of software security defects, lists of common session ID cookie names, regexes for numerous types of Personally Identifiable Information, and more.
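
Used programmatically, such a dictionary reduces response triage to a loop. A minimal sketch, assuming a file of one regex per line (the path is a placeholder):

import re

# Minimal sketch: load a FuzzDB regex dictionary (placeholder path) and
# report which patterns fire on a given response body.
with open("fuzzdb/regex/errors.txt") as f:
    patterns = [re.compile(line.strip()) for line in f if line.strip()]

def interesting(body):
    return [p.pattern for p in patterns if p.search(body)]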

Documentation – Helpful documentation and cheatsheets sourced from around the web that are relevant to the payload categories are provided.

Other useful stuff – Webshells, common password and username lists, and some handy wordlists.

You can browse its contents using Google Code’s Source browser.

What can FuzzDB be used for?

  • Web application penetration testing using popular penetration testing tools like OWASP Zap or Burp Suite
  • A standard ZAP Intercepting Proxy add-on
  • Building new automated scanners and automation-assisted manual penetration test tools
  • Testing network services that use something other than HTTP semantics
  • As malicious inputs for testing GUI or command-line software
  • Using the patterns to make your open source or commercially licensed application better
  • Identifying interesting responses to your probes. Here is a screenshot illustrating how this looks in Burp Suite
  • Testing your IDS or IPS by using these test cases to “attack” your web server
  • Testing during a bake-off of web security product vendors
  • Testing a new custom web server or other network service for vulnerability to the patterns that have worked on one or more other platforms in the past
  • Building intrusion identification and response systems
  • Winning app security Capture the Flag competitions
  • As a learning tool for better understanding various different malicious byte combinations which can cause the same vulnerability

If you’re using FuzzDB in a novel way, I’d love to hear about it!

The Future of FuzzDB

There is still a lot of work to be done to improve FuzzDB. My plan for the upcoming year includes:

  • Respond to the outstanding bugs
  • Come up with a consistent naming structure (this is actually one of the bugs)
  • Write more documentation, such as these blog posts
  • Update the Discovery files; they’re still very useful but a few years old
  • Improve some of the Attack payload categories
  • Help it work better with OWASP Zap and Minion

In addition, FuzzDB will move into a wiki that will allow discussion of the contents and permit collaboration on new items.
If you’re interested in helping in any of these areas, have suggestions such as a consistent directory and name format for FuzzDB, or have more fuzz files to send, I’d love to hear from you.