Introducing Minion

yboily


Minion is a platform developed over the past year by the Security Automation team at Mozilla to enable the integration and adoption of automated security testing.

The platform lets any team set up automated scanning and testing of websites and services by providing sensible defaults for plugins that cover many types of web applications and services.

The 0.3 release of Minion marks several milestones that have allowed us to start using Minion internally across our development community, quality assurance, and security teams.

Architecture

Minion is intended to be a platform that is simple to use, easy to deploy, simple to extend, and flexible enough to be integrated into any development or operations workflows. At a high level there are three major components in Minion: Plugins, Task Engine, and Front End.

Minion Plugins are light-weight wrappers that perform tasks such as configuring, starting, and stopping a plan, and they accept a set of callbacks to notify the caller when information is available. To be used, plugins require a plugin runner that handles their invocation and their results; in addition to supporting Minion’s task engine, the Minion backend repository includes command-line scripts for executing plugins. This supports testing during the development of new plugins and allows a high degree of flexibility in how plugins are used outside of Minion.

The Task Engine is the core platform; it provides an API for managing and configuring Plans (collections of plugins and configurations), collections of users, sites and services, and the results of executions of Plans against those targets.

The Front End is a web application that provides both administration and day-to-day use of Minion; users can perform most of the configuration tasks needed to set up Minion plans, targets, and users, as well as review the results of Minion scans. Being a Mozilla project, the front-end uses Persona for authentication, but all access-control decisions are made within Minion itself.

Minion Plugins

At their heart, Minion plugins are automation scripts designed to abstract away the platform, operating system, and features that an individual security tool implements, and provide a single mechanism for configuring the tool, initiating a scan, and collecting the results.

It may be helpful to look at the code for an existing plugin to better understand how they work; the AlivePlugin is a clear, simple example.

The Alive plugin is an extremely basic plugin that simply confirms that a host is reachable, but it implements all of the required features and extends BlockingPlugin. The plugin exposes member variables that provide user-interface cues (the name and links to additional information) and, in this case, some built-in report objects. The actual logic of the scan is performed in the do_run method, and since no detailed setup or teardown is required, the starting and stopping functionality provided by BlockingPlugin is sufficient.

Two base classes for plugins are provided in the Minion backend to get developers started:

  • BlockingPlugin provides the basic functionality to support a plugin that performs a task and reports its completion state at the end. This is suitable for creating straightforward plugins directly in Python (a minimal sketch follows after this list).
  • ExternalProcessPlugin provides the functionality required to kick off an external tool, and serves as the basis for several other extensions, especially those that wrap existing security tools.
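
To make the shape of a plugin concrete, here is a minimal sketch in the spirit of the Alive plugin described above. It assumes the BlockingPlugin base class and do_run hook from the Minion backend; the exact import path and reporting helpers may differ from the real code, so treat the names below as illustrative rather than authoritative.

# A minimal, illustrative Minion plugin. The import path, configuration keys
# and reporting helper are assumptions based on the description above -- check
# the minion-backend source for the real signatures before relying on this.
import socket

from minion.plugins.base import BlockingPlugin  # assumed module path


class PortOpenPlugin(BlockingPlugin):
    """Reports whether a single TCP port on the target is reachable."""

    PLUGIN_NAME = "PortOpen"
    PLUGIN_VERSION = "0.1"
    FURTHER_INFO = [{"URL": "https://example.org/port-open", "Title": "About this check"}]

    def do_run(self):
        # self.configuration is assumed to be populated from the plan's
        # "configuration" block plus the target selected for the scan.
        target = self.configuration.get("target", "")
        port = int(self.configuration.get("port", 80))
        host = target.split("://")[-1].split("/")[0]

        try:
            socket.create_connection((host, port), timeout=5).close()
            issues = [{"Summary": "Port %d is open" % port, "Severity": "Info"}]
        except OSError:
            issues = [{"Summary": "Port %d is not reachable" % port, "Severity": "High"}]

        # report_issues() is how BlockingPlugin subclasses are described as
        # returning results; the name is taken from the Alive plugin example.
        self.report_issues(issues)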

In addition to several basic “proof of technology” plugins that collect details about targets and provide best practice information, the Minion development team is currently maintaining three other extensions:

  • OWASP Zed Attack Proxy: this plugin wraps the OWASP ZAP platform and enables detailed application scanning
  • Skipfish: a simple but powerful web fuzzer from Google
  • nmap: a port scanning tool generally accepted as the best in its class

Minion Task Engine

The Task Engine provides the core functionality for managing users, groups, sites, scans, and results within the Minion platform. Acting as a central hub, the Task Engine maintains a register of available plugins, provides facilities for creating and modifying plans, and manages user access to Minion, including which sites each user can scan.

Plugins

Plugin deployment is one of the few features of Minion that cannot currently be managed from the Front-End; this is a consequence of the configuration needed to deploy plugins. The Front-End does, however, let you review the available plugins and get their class details, which is the information required to add a plugin to a Plan.

Plans

A Minion Plan is a JSON document that provides some information about what the plan does and a sequence of tools to invoke. An example can be found below:


{
     "name": "Fuzz and Scan",
     "description": "Run Skipfish to fuzz the application, and perform a ZAP scan.",
     "workflow": [
          {
               "plugin_name": "minion.plugins.skipfish.SkipfishPlugin",
               "description": "",
               "configuration": {}
          },
          {
               "plugin_name": "minion.plugins.zap_plugin.ZAPPlugin",
               "description": "Run the ZAP Spider and Scanner",
               "configuration": {
                    "scan": true
               }
          }
     ]
}

In this example, the name and description are intended to be human-readable descriptions of what the plan will do, while the workflow array contains a set of plugin names, a description that will be included in the plan details, and a set of configuration details that may be plugin specific.

Users and Invites

Minion is intended to be a team-oriented tool; as a result, the platform supports user and group management. User accounts are created through an invitation mechanism or via the administrative interface. The invitation system allows administrators to pre-create groups, sites, and plans within Minion, and to add a user to a group before that user has enrolled. Once the invite is issued, an email is sent to the user, who can then access a configured profile.

Groups

Groups are the mechanism by which administrators control users’ visibility into sites and results within Minion. For a user to interact with a site via Minion, the user needs to be added to a group and the site needs to be associated with that group. This provides extremely fine-grained control over visibility into scan results. Currently, group membership allows both viewing scans and re-executing them, but as the project progresses, constraints can be added to allow users to review results without being able to initiate scans.

Minion Front-End

Designed to be easy to use and to provide instant feedback, the front-end provides access to the Minion platform. Each piece of the functionality described above is accessible via the front-end and is implemented by calling the web services exposed by the Task Engine. One of the advantages of the architecture is that the front-end can easily be re-engineered with no impact on the back-end or plugins.

Technologies

Minion is built with Python, Angular.js, and several packages that help ensure a reliable end-to-end service. These technologies were selected by our development team, but the architecture and each of the service boundaries are designed around JSON calls to permit easy integration with other services. Because of the design principles applied, it is entirely possible to implement plugins that run on any operating system or platform and that do not need to reside on the same server. With the appropriate network configuration it is possible to deploy the front-end, task engine, and plugins on different networks, which lets users limit the attack surface that needs to be deployed in sensitive networks.

Road Map

There are several features that are under active development, and should be implemented over the next several releases.

Authentication & Access Management

Site Ownership Verification

This is a critical feature that enables users to demonstrate ownership of a site before initiating scans.

Granular Access Control

The ability to govern users’ ability to scan by group and site ownership, as well as by role.

Plugin Improvements

Improved Results Reporting

Minion is only as good as its plugins. Now that we have a working and reliable core platform, refining plugin results and improving reporting is a core objective.

Deferred Execution Plugins

Sample implementations that invoke third-party services, so that we can demonstrate integration with other Security as a Service platforms.

Reporting Plugins

Currently we have assigned risk ratings to findings based on our best practices, but that is not necessarily reflective of the priority of issues to other teams. We intend to implement a pluggable reporting interface, including the ability to add plugins to modify the risk ratings based on the security posture and priorities of the teams using Minion.

Front End

Landing Pages

Currently Minion is designed for technical users who need to see deep technical details. In the future, it may be desirable to generate metrics and dashboards; to facilitate that, landing page support will be implemented to allow customized user views.

Task Engine Improvements

Cohort

Minion is designed to support dynamic analysis via web application scanning. This is only one part of the story regarding how to perform automated security testing. Cohort is a branch of Minion that will enable analysis of source code repositories and perform static analysis.

Historical Issues

In order to facilitate ongoing tracking of a security program, we will implement support for and integration with third-party issue trackers (initial targets are Bugzilla and GitHub), as well as the ability to compare multiple scans over time.

Why Minion?

The Mozilla Security team supports hundreds of websites and services, and products used by hundreds of millions of users. In addition, our team supports hundreds of employees and thousands of community members who contribute to Mozilla products and services. Scaling to that level is not feasible without improving our automation capabilities. While it would be much easier to solve this problem for ourselves alone, Mozilla’s mission is to support the open web and protect our users. By building Minion as the foundation of a security-as-a-service platform, integrating open source and free tools, and releasing it as open source, we aim to contribute a platform that any team can use to dramatically improve their coverage and integrate security testing automation into all parts of their IT operations and software development processes.

Minion is an open source project, and we welcome contributors, users, and feedback!

Finally, I would like to extend a huge thanks to Stefan Arentz, Simon Bennetts, Yeuk Hon Wong, Matthew Fuller, and all of the other developers who have moved Minion from a sheet of paper and a set of shell scripts to a production service!

OCSP Stapling in Firefox

dkeeler


OCSP Stapling has landed in the latest Nightly builds of Firefox! OCSP stapling is a mechanism by which a site can convey certificate revocation information to visitors in a privacy-preserving, scalable manner.
Revocation information is important because at any time after a certificate has been issued, it may no longer be appropriate to trust it. For instance, maybe the CA that issued the certificate realizes it put incorrect information on it. Maybe the website operators lose control of their private key, or it gets stolen. More benignly, maybe the domain was transferred to a new owner.
The Online Certificate Status Protocol (OCSP) is one method for obtaining certificate revocation information. When presented with a certificate, the browser asks the issuing CA if there are any problems with it. If the certificate is fine, the CA can respond with a signed assertion that the certificate is still valid. If it has been revoked, however, the CA can say so by the same mechanism.
Image: OCSP prevents an attack.
OCSP has a few drawbacks. First, it slows down new HTTPS connections. When the browser encounters a new certificate, it has to make an additional request to a server operated by the CA. Second, it leaks to the CA what HTTPS sites the user visits, which is concerning from a privacy perspective. Additionally, if the browser cannot connect to the CA, it must choose between two undesirable options. It can terminate the connection on the assumption that something is wrong, which decreases usability. Or, it can continue the connection, which defeats the purpose of doing this kind of revocation checking. By default, Firefox currently continues the connection. The about:config option security.OCSP.require can be set to true to have Firefox terminate the connection instead.
OCSP stapling solves these problems by having the site itself periodically ask the CA for a signed assertion of status and sending that statement in the handshake at the beginning of new HTTPS connections. The browser takes that signed, stapled response, verifies it, and uses it to determine if the site’s certificate is still trustworthy. If not, it knows that something is wrong and it must terminate the connection. Otherwise, the certificate is fine and the user can connect to the site.
Image: The site asks the CA for certificate status.
Image: OCSP stapling.
If Firefox requests but does not receive a stapled response, it falls back to normal OCSP fetching. This means that while OCSP stapling protects against mistakes and many basic attacks, it does not prevent attacks involving more complete network control. For instance, if an attacker with a stolen certificate were able to block connections to the CA OCSP responder while running their own server that doesn’t do OCSP stapling, the user would not be alerted that the certificate had been revoked. A new proposal currently referred to as “OCSP-must-staple” is intended to handle this case by giving sites a way of saying “any connection to this site must include a stapled OCSP response”. This is still in development.
OCSP stapling works with all CAs that support OCSP. OCSP stapling has been implemented in popular web servers including nginx and Apache. If you run a website, consider turning on OCSP stapling to protect your users. If you use Firefox Nightly, enjoy the increased security, privacy, and performance benefits!
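
If you want to verify that your server is actually sending a stapled response, one low-tech way is to ask OpenSSL’s s_client to request certificate status during the handshake and look for an OCSP response in its output. A rough sketch follows; it assumes the openssl command-line tool is installed, and the string matching is deliberately naive.

# Rough check for OCSP stapling using the openssl CLI (must be installed).
# "-status" asks s_client to request a stapled OCSP response in the handshake.
import subprocess

def has_stapled_response(host, port=443):
    proc = subprocess.run(
        ["openssl", "s_client", "-connect", "%s:%d" % (host, port),
         "-servername", host, "-status"],
        input=b"", capture_output=True, timeout=30)
    output = proc.stdout.decode("utf-8", "replace")
    # s_client prints "OCSP response: no response sent" when nothing is stapled;
    # a stapled response includes an "OCSP Response Status" section instead.
    return "OCSP Response Status: successful" in output

if __name__ == "__main__":
    print(has_stapled_response("example.com"))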

How to speed up OWASP ZAP scans

Simon Bennetts


So you’ve used OWASP ZAP to scan your web application, and it’s taking far too long :(

Is that it, do you have to lump it or leave it?

There are actually many things you can do, but the first thing you have to do is work out why it’s taking a long time.

How Scanners work

It helps to understand how scanners like ZAP work.

Typically they explore the application using a spider (also known as a crawler). This identifies all of the URLs that make up the application, all of the forms and all of the parameters.

They then usually attack every parameter on every page.

The time a scan takes is therefore based on:

[Number pages] x [number parameters] x [number attacks] x [how long a request takes] / [number of threads]

There will be a practical limit to the number of threads that will actually be useful – you will always be limited by the network and the amount of processing power on both the target application and the attacking machine (especially if they are the same!).

So if you have a very large application with lots of pages and parameters running on a relatively slow machine then with a default configuration any scanner will take a long time to complete!
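
To put rough numbers on the formula above, here is a back-of-the-envelope estimate. Every figure in it is made up for illustration; plug in values from your own application.

# Back-of-the-envelope scan time estimate using the formula above.
# All of the numbers here are illustrative.
pages = 2000
params_per_page = 10
attacks_per_param = 12        # roughly the "Medium" attack strength
seconds_per_request = 0.2
threads = 10

total_requests = pages * params_per_page * attacks_per_param
hours = total_requests * seconds_per_request / threads / 3600
print("%d requests, roughly %.1f hours" % (total_requests, hours))
# -> 240000 requests, roughly 1.3 hours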

However most scanners are very configurable, so even if you do have a massive application there are lots of approaches you can use.

When investigating performance issues with ZAP I recommend running it with the UI even if you want to run it in headless mode in the end – it will allow you to see what’s going on much more effectively.

How to identify the bottlenecks

The most important thing is to identify the underlying causes, and there are many possibilities, any or all of which could be the culprits:

  • Target application/machine overloaded
  • Attacking machine overloaded
  • Network overloaded
  • Firewall throttling connections
  • Spider looping
  • Too many pages + params + tests
  • Inappropriate tests
  • Badly written tests
  • Unnecessary / duplicated tests

Hardware and networks

It is worth identifying hardware and network related issues first.

Have a look at the CPU usage on both the target and attacking (the one with your scanner) machines – is either of them excessively high? If either machine is underpowered or low on memory then you may need to look at using more powerful machines.

Check the target application logs – ZAP has a tendency to overwhelm applications that are not designed with high performance in mind ;)

Crucially you should look at the number of requests that ZAP is making.

Both the Spider and Active Scanner dynamically report the URLs that they have accessed. The Spider shows a count of URIs it has found on its toolbar – you can expect this to rise quickly at the start and then tail off as the Spider progresses:

Image: The Spider toolbar showing the count of URIs found.


The Active Scanner has a “Scan Progress Detail” popup accessible from its toolbar that shows the time each rule has taken, the total number of requests and the time each request took:

Image: The “Scan Progress Detail” popup showing per-rule timings and request counts.

How fast requests can be made will depend on many factors, but if each request is taking over a second then you are likely to have a hardware or network problem that is outside of the scope of this blog post!

If requests are taking an excessively long time then check to see if there is something on your network that might be throttling the connection, having identified ZAP as a potentially malicious tool :)

The spider

If the spider never completes then have a look at the requests it is making. If it appears to be making very similar requests then it might have got stuck in a loop.

This shouldn’t happen – there is code to prevent it – but if it does then you should report the problem; in the meantime you can use regex excludes to prevent the spider from accessing the links that cause it problems.

The scanner rules

Have a look at the “Scan Progress Detail” popup after the scan has completed – this will show you which rules were run and how long they each took.

If one rule is taking significantly longer than the others then there may be a problem with it – report it and we’ll look into it. This is more likely with the alpha and beta scan rules than the release-quality ones.

Also have a good look at which rules are being run – if you know your application is definitely not using an SQL database then there is no point running those rules. You can configure which rules are run via the Policy dialog which is also linked off the Active Scan toolbar:

Image: The scan Policy dialog, used to select which rules are run.

ZAP configuration

There are also various spider and active scanner options which you should double check – the defaults are good for most cases but may have been changed or may not be suitable for your environment. These are accessible via the top level “Tools/Options…” menu or from the relevant toolbar:

Image: The spider and active scanner Options screens.

Check that they are set to sensible values – click on the blue ‘?’ help icon in the top right hand corner as this gives much more information about the parameters.

Be especially aware of the active scanner “Delay when scanning in milliseconds” – this should usually be set to zero, particularly if the scan is taking too long.

The “Attack Strength” is also important – this is roughly the number of requests you can expect each rule to make on every parameter on every page. All rules are unique and some only ever use a very small number of requests, but in general assume:

  • Low – up to 6 requests
  • Medium – up to 12 requests
  • High – up to 24 requests
  • Insane – over 24 requests, potentially hundreds

The default is Medium – you should not go higher than this if you are having performance problems. In a future release we are planning on allowing the Attack Strength to be configured on a per rule basis.

Also be aware that while the “Handle anti CSRF tokens” option is very useful if your application uses anti-CSRF tokens, it can significantly impact performance as it forces the scanner to run single threaded.

The application structure

The final recommendation can potentially have the biggest effect – it’s always worth saving the best until last :)

Have a look at the structure of your application in the Sites tree – are there a very large number of nodes anywhere in the application?

I have been working with the Mozilla QA team to get ZAP security tests included in their Selenium service.

One particular site was taking so long that they thought ZAP had hung – it hadn’t, but in the end it took 13 hours to complete the scan!

When I looked at the Sites tree I found that one node had many thousands of children. It turned out that this part of the application was data driven, and there were a very large number of records which all ‘generated’ multiple pages. So ZAP was attacking the same code in the same way thousands of times, which was pointless – the important thing is to attack the code, not to worry about all of the data held in the db.

We fixed this by adding an “Exclude from scanner” rule. We could have excluded the whole subtree, but we wanted to scan the code at least once, so we came up with a regex similar to:

".*/bigsubtree/(?!justincthispart).*"

which excluded everything apart from a relatively small subset of the child nodes, i.e. the “justincthispart” subtree under “bigsubtree”.

This had a dramatic effect – the spider and active scanner now complete in 40 minutes!
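
For reference, the negative lookahead in that exclude pattern is doing the work: it matches (and therefore excludes) every URL under /bigsubtree/ except those beginning with justincthispart. A quick illustration, with invented URLs:

# Demonstrates how the "Exclude from scanner" regex above behaves.
# A URL that matches the pattern is excluded from the scan.
import re

exclude = re.compile(r".*/bigsubtree/(?!justincthispart).*")

urls = [
    "https://example.com/bigsubtree/record/12345",       # excluded
    "https://example.com/bigsubtree/record/99999",       # excluded
    "https://example.com/bigsubtree/justincthispart/1",  # still scanned
    "https://example.com/otherpart/page",                # still scanned
]

for url in urls:
    print("excluded" if exclude.match(url) else "scanned ", url)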

Online help

And if none of that helps then get in touch!

ZAP user group: https://groups.google.com/group/zaproxy-users

ZAP developer group: https://groups.google.com/group/zaproxy-develop

IRC: #websectools on irc.mozilla.org (https://irc.lc/mozilla/websectools/zapuser???)

Simon Bennetts (Mozilla Security Team and ZAP Project Lead)

Mixed Content Blocker hits Firefox Beta!

Tanvi


The Mixed Content Blocker we described last month is now available in Firefox Beta and is on track for a general release in August with Firefox 23. When secure HTTPS pages load additional content insecurely over HTTP (a.k.a. Mixed Content), users are vulnerable to man-in-the-middle and eavesdropping attacks. The Mixed Content Blocker will block insecure active content by default, protecting our users from these attacks.

Call to Users – Report problems
If you find a website that isn’t functioning correctly because it contains insecure content that is being blocked by the Mixed Content Blocker, please let us know by sending an email to security@mozilla.org or commenting in our compatibility tracking bug.

How can you tell if a site has Mixed Content that Firefox has blocked? Look for this Shield Icon in the location bar.

Image: A small shield icon is shown before the web page address in the location bar when Firefox has blocked Mixed Active Content.

If you’d like to contribute further and help us find compatibility issues you can participate in our QA test day on Monday, July 1st.

Call to Web Developers – Test your site with Firefox Beta
If you rely on HTTP resources in your HTTPS pages, this feature might break your website. If you do find Mixed Content issues on your webpage in Firefox 23+, chances are the same issues exist in Chrome and/or Internet Explorer, which have also implemented this feature.

The best way to tell if your site will load correctly in Firefox 23 is to download the latest Firefox Beta and browse through your website with the Web Console open. Enable the “Security” messages in Web Console and check for messages about Mixed Content.

Image: The Web Console lists the Mixed Display Content that's loaded and the Mixed Active Content that's blocked.

If you want to test your site in a more automated fashion, you can try using Skipfish, a web application security tool. Skipfish has a -M option that will report mixed content issues on your webpage.

To fix your site, simply replace http:// links with their https:// equivalents on your SSL pages. You can also use protocol-relative links if you use the same source code to serve your HTTP and HTTPS website.
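
As a rough way to spot candidates for that rewrite, you can fetch a page you serve over HTTPS and list any hard-coded http:// references in src and href attributes. The sketch below is a crude heuristic only, not a replacement for the Web Console or Skipfish; the example URL is illustrative.

# Crude mixed-content spotter: fetch an HTTPS page and list http:// URLs
# referenced from src/href attributes. It does not execute scripts or follow
# CSS imports, so use the Web Console for the full picture.
import re
import urllib.request

def find_http_references(page_url):
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
    return re.findall(r'(?:src|href)\s*=\s*["\'](http://[^"\']+)', html, re.IGNORECASE)

if __name__ == "__main__":
    for url in find_http_references("https://example.com/"):
        print(url)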

If the Mixed Content resources on your page come from a third party, there is a chance that the HTTPS equivalent version already exists. For example, youtube.com has both HTTP and HTTPS video embed options. If the HTTPS version does not exist, consider contacting the third party (especially if they are one of your partners) and ask them to provide an HTTPS version of the content.

Call to Contributors – Contact Sites
We’ve been working on site compatibility issues, trying to find websites that are affected by the Mixed Content Blocker and alert them before Firefox 23 is released in August. However, finding accurate contact information for the affected sites has been a difficult task. And we could really use some help ;)

If you would like to contribute, please take a look at the list of affected sites and see if you can contact their website administrators and inform them of the Mixed Content compatibility issues that they are about to run into with Firefox 23 (and likely already have with Chrome or Internet Explorer). If you are able to find contact information and/or alert the website please let us know in the associated bug.

You can also help find more affected sites by participating in our QA test day on Monday, July 1st.

Want to Learn More?
Check out a more detailed blog post on this feature here.

Responding to Claims of Compromise

mcoates


Issue
A hacking group called “AnonGhost” is claiming to have compromised “Mozilla Emails Managers” and exposed the email addresses and a 16-character value for 50 accounts. Upon investigation we’ve determined the 16-character values are not user passwords. Instead, they are activation codes used for the initial activation of user accounts for Mozilla blogging software.

Impact
The claim relates to 50 Mozilla employees, former Mozilla employees, and other people in the Mozilla community. The activation code cannot be used to directly access any systems. In all cases a username and password are required to access the blogging software. We have no indication that passwords were at risk.

Status
At this time we are still performing additional investigations to understand how the activation codes were exposed. We’ll make sure to address any concerns that are uncovered.

Michael Coates
Director of Security Assurance

Web Developer Security 1.0

Tanvi


Raymond Forbes and I will be presenting Web Developer Security 1.0 on Tuesday, June 18th at 12:15 pm PDT. The training will be held in Mozilla’s Mountain View office and also broadcast online.

We will cover a grab bag of proactive security measures Web Developers can take to protect their users and their site. Rather than focusing on how to attack a website, this training focuses on how you can safeguard your website from common threats. Some of the topics we will cover include Content Security Policy, X-Frame-Options, cookie security flags, iframe sandbox, content sanitization, and sensitive data encryption. Deploying these techniques will help protect your users and improve the security of your site.

For those of you who are able to come watch the talk in person, there will be Punch & Pie!

https://air.mozilla.org/web-security-training/

Content Security Policy 1.0 Lands In Firefox

imelven


Content Security Policy (usually abbreviated as CSP) is a way for web pages to restrict the sites allowed to include content within the page. It also can restrict whether inline scripts are allowed to run and inline styles/CSS are allowed to be applied to the page. In general, CSP allows web developers greater control over their content, helping mitigate several security problems. One major benefit of CSP is that, by default, it prevents inline scripts from executing. This greatly helps mitigate the threat of XSS (Cross Site Scripting) or other forms of script injection. For a great introduction to CSP, see Mike West’s post “An Introduction to Content Security Policy”.

The idea of a document being able to specify content restrictions dates back to at least 2007, when it was discussed by both Gervase Markham of the Mozilla Project and Robert ‘rsnake’ Hansen, a security researcher. Brandon Sterne and Sid Stamm worked on an initial prototype implementation of CSP for Firefox long before an official specification existed. This ‘pre-spec’ implementation of CSP landed in Firefox 4.0 in March 2011 and used the X-Content-Security-Policy header. The concept of CSP gained traction fairly rapidly, with Chrome shipping its first implementation, using the X-Webkit-CSP header, in August 2011. After much discussion among security and web experts, a working draft of a W3C specification for Content Security Policy 1.0 was published in November 2011. The syntax specified by the working draft was quite different from the syntax used by the initial Firefox implementation, as the concepts had evolved and been refined over time. A year later, the CSP 1.0 spec reached the Candidate Recommendation stage, where it was ready to be implemented. Chrome shipped support for the CSP 1.0 spec using the unprefixed header in Chrome 25 last February. Internet Explorer 10 added support for CSP’s ‘sandbox’ directive, but it does not currently support the rest of the CSP directives.

What changed between the original Firefox CSP implementation and the CSP 1.0 spec?

  1. The Header Has Been Unprefixed
     Instead of X-Content-Security-Policy, the spec defines the Content-Security-Policy header. This is great, because we no longer have the situation where a site has to send multiple CSP headers (with different syntax!) to have its policy enforced in CSP-supporting browsers. The same Content-Security-Policy header will work for Firefox, Chrome, IE 10 (sandbox only) and any other browsers that implement the spec. If for some reason a site sends both the X-Content-Security-Policy header and the Content-Security-Policy header, the prefixed header will be ignored and only the policy from the unprefixed header will be applied.

  2. Changes to the Available Directives
     The directives available within a policy changed somewhat. The original Firefox CSP implementation used the ‘allow’ directive to specify the default policy that will be used for unspecified directives. This has been replaced by the ‘default-src’ directive in CSP 1.0. Additionally, Firefox’s original implementation used the ‘xhr-src’ directive to restrict the origins to which an XMLHttpRequest object can connect. In the 1.0 spec, ‘xhr-src’ was replaced by ‘connect-src’ – which, in addition to XHR, also restricts where EventSource and WebSocket objects can connect.

  3. Changes to Default Behavior
     The initial Firefox implementation of Content Security Policy failed closed, meaning that future syntax wasn’t backwards compatible. In CSP 1.0, a missing default-src directive falls back to allowing all sources.

  4. Changes to Allowing Inline Script and the Use of eval()
     The method for opting into allowing inline script and the use of eval() changed. In the original Firefox CSP implementation, the ‘options’ directive was used with the values inline-script and eval-script to do this. For example, an original CSP policy of “allow ‘self’; options inline-script eval-script” would allow content to be loaded from the same origin as the CSP-protected document and also allow inline scripts to execute and eval() to be used.
     In CSP 1.0, additional keywords have been added to the ‘script-src’ directive to handle this situation. Adding ‘unsafe-inline’ to ‘script-src’ opts into allowing inline script, and ‘unsafe-eval’ opts into allowing the use of eval(). Both keywords can be specified to opt into doing both, although this decreases the value of using CSP quite a bit! For example, a CSP 1.0 policy of “default-src ‘self’; script-src ‘self’ ‘unsafe-inline’ ‘unsafe-eval’” allows content to be loaded from the same origin as the CSP-protected document and also allows inline scripts to execute and eval() to be used.

  5. Blocking Inline Styles
     Firefox’s original CSP implementation did not block inline styles at all. This was a later addition to the CSP spec. It aims to prevent attacks that inject <style> elements or other HTML elements with a style attribute. These attacks can be carried out even when executing script is not allowed. Some potential attacks include using CSS selectors to exfiltrate data from the page and using attributes to overlay one element on top of another, leading to a possible phishing attack.
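
To tie the pieces together, here is a small sketch of a CSP 1.0 policy being sent with the unprefixed header from a toy Python server. The policy itself, and the hostnames and report path in it, are only an example of the syntax discussed above, not a recommendation for any particular site.

# Toy server that sends a CSP 1.0 policy using the unprefixed header.
# The policy, CDN hostname and report-uri path are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

POLICY = "default-src 'self'; script-src 'self' https://cdn.example.com; report-uri /csp-report"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><h1>CSP demo</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Security-Policy", POLICY)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()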

Are there still some differences between the Firefox CSP 1.0 implementation and the spec?

Yes, some fairly minor ones.

  • The frame-ancestors directive is still supported. This directive is similar to the X-Frame-Options header, restricting which sites are allowed to frame the webpage. The X-Frame-Options header has been deprecated and also had some issues as originally specified. It’s been proposed that this functionality be rolled into CSP, via the frame-options directive. frame-options is a new CSP directive proposed as part of the “User Interface Security Directives for Content Security Policy” spec under development. At some point, Firefox’s frame-ancestors directive will likely be deprecated in favor of frame-options.
  • The report-uri directive allows a policy to specify where CSP violation reports are sent. In Firefox, this is limited to sending reports to the origin of the document which specified the CSP. There’s ongoing discussion on the correct restrictions and concerns around reports, both within the W3C WebAppSec working group and Mozilla; please see Bug 843311.
  • The SMIL animation elements are blocked when inline styles are blocked. The biggest driver for this is Bug 704482 – a very clever attack from Mario Heiderich which allows reading keystrokes even when script is not allowed to execute. We are taking the cautious approach here and plan to bring this up within the W3C WebAppSec working group.
  • Firefox does not support the ‘sandbox’ directive. This directive is optional in the CSP 1.0 spec, but is planned to be part of CSP 1.1.

What does the future hold for CSP in Firefox?

We plan to deprecate the X-Content-Security-Policy header at some point. Part of the motivation for this post is to let people know it’s time to transition their sites to using the unprefixed Content-Security-Policy header. Firefox displays a message in the web console informing web developers the prefixed header will be deprecated in the future.

Additionally, we’re participating in the development of the CSP 1.1 spec via the W3C WebAppSec working group, and Mozilla’s Dan Veditz is one of the editors of this spec. We’re especially excited about the new nonce-source source for the script-src and style-src directives. This source allows the whitelisting of specific inline scripts and styles if they provide the same valid nonce specified in the policy. nonce-source was recently implemented in Blink and is under development in Firefox. See section 4.10.1 “Usage” in the CSP 1.1 spec for details, and please note that this spec is still rapidly evolving!

Within Mozilla we have also discussed taking the inline style blocking of CSP further. In particular, there’s a desire to add functionality similar to blocking eval() – the rationale is that constructing styles from strings is also inherently dangerous. Bug 873302 covers this, and some of the discussion around it is contained within the original ‘block inline styles’ bug. We plan to bring this up within the W3C WebAppSec working group in the very near future to collect more feedback on this idea. It’s also been proposed within Mozilla (and discussed somewhat within the working group) to create a CSP directive that would block the usage of .innerHTML. While injecting script elements via innerHTML would be blocked by a CSP that blocks inline scripts, there are many other problems that can result from using untrusted input in .innerHTML.

That was too long – just tell me, where can I use the Content-Security-Policy header now?

  • Firefox: now in desktop Firefox 23 (Aurora) and later. Firefox for Android and Firefox OS soon to follow.
  • Chrome: 25 and later
  • Internet Explorer: 10 and later (sandbox directive only)

Thanks!

  • The original lead for developing CSP at Mozilla: Brandon Sterne
  • Mozilla Security Engineering & Security Assurance, especially Sid Stamm, Brian Smith, Daniel Veditz, Garrett Robinson, Mark Goodwin, Frederik Braun, and Tanvi Vyas
  • Fellow CSP enthusiasts: Mike West, Adam Barth, Brad Hill, Devdatta Akhawe, Neil Matatall, Joel Weinberger, Kailas Patel

Mixed Content Blocking in Firefox Aurora

Tanvi


Firefox 23 moved from Nightly to Aurora this week, bundled with a new browser security feature. The Mixed Content Blocker is enabled by default in Firefox 23 and protects our users from man-in-the-middle attacks and eavesdroppers on HTTPS pages.

When an HTTPS page contains HTTP resources, the HTTP resources are called Mixed Content. With the latest Aurora, Firefox will block certain types of Mixed Content by default, providing a per-page option for users to “Disable Protection” and override the blocking.

What types of Mixed Content are blocked by default and what types are not? The browser security community has divided mixed content into two categories: Mixed Active Content (like scripts) and Mixed Passive Content (like images). Mixed Active Content is considered more dangerous than Mixed Passive Content because the former can alter the behavior of an HTTPS page and potentially steal sensitive data from users. Firefox 23+ will block Mixed Active Content by default, but allows Mixed Passive Content on HTTPS pages. For more information on the differences between Mixed Active and Mixed Passive Content, see here.

Mixed Content Blocker UI
Designing UI for security is always tricky. How do you inform the user about a potential security threat without annoying them and interrupting their task?

Larissa Co (@lyco1) from Mozilla’s User Experience team aimed to solve this problem. She created a Security UX Framework with a set of core principles that drove the UX design for the Mixed Content Blocker.

When a user visits an HTTPS page with blocked Mixed Active Content, they will see a shield icon in the location bar:

Image: The Shield icon doorhanger shown on an HTTPS page with Mixed Active Content.

Clicking on the shield, the user will see options to “Learn More”, “Keep Blocking”, or “Disable Protection on This Page”:

Image: The Shield doorhanger drop-down UI.

If a user decides to “Keep Blocking”, the notification in the location bar will disappear:

Image: If the user decides to Keep Blocking, the shield disappears.

On the other hand, if a user decides to “Disable Protection on This Page”, all mixed content will load and the lock icon will be replaced with a yellow warning sign:

Image: A yellow warning triangle appears after the user disables protection.

When a user visits an HTTPS page with Mixed Passive Content, Firefox will not block the passive content by default. But since the page is not fully encrypted, the user will not see the lock icon in the location bar:
Image: A page with Mixed Passive Content shows the Globe icon instead of the Lock icon.

Compatibility
We have a master tracking bug for websites that break when Mixed Active Content is blocked in Firefox 23+. In addition to websites that our users have been reporting to us, we are running automated tests on the top Alexa websites looking for pages with Mixed Active Content. If you run into a compatibility issue with a website involving mixed content, please let us know in the master bug, or take a step further and contact the website to let them know. Chances are, their website is also broken in Chrome and/or Internet Explorer. Chrome and Internet Explorer also have Mixed Content Blockers, but their definitions of Mixed Active and Mixed Passive Content differ slightly from Firefox’s.

Want to learn more?
Still curious and want to learn more details about the Mixed Content Blocker in Firefox? Check out this more detailed blog post or feel free to ask us questions on mozilla.dev.security.

Orangfuzz – an experimental user interaction fuzzer for Firefox OS

Gary

One of the goals of the fuzzing team is to identify security vulnerabilities within our products using various techniques. As we continue working with Firefox OS, we need to build and adapt the proper tools to enable fuzz testing on the mobile device.

Orangfuzz is an experimental user interaction fuzzer. It builds on generate-orangutan-script.py and uses the Orangutan framework. Orangutan injects events directly into the low-level kernel device file that represents an Android device’s touch screen. It supports actions such as “tapping” and “dragging”, simulated from a user’s perspective. The fuzzer generates an Orangutan script containing random sets of these actions.

This concept was inspired by bug 838215, which was a crash involving the handling of touch events.

Orangfuzz currently only supports the B2G Test Driver device, but adding support for other devices, if Orangutan supports them, is straightforward. We define a device through its specifications (e.g. home key location, screen resolution), so supporting an additional device is as simple as adding a new subclass that provides the appropriate resolution and screen sizes. It may be possible to run this against the B2G emulators, but this has not been tested.
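
For a feel of what this looks like, the sketch below generates a handful of random taps and drags bounded by a device’s screen resolution. The device class, its dimensions, and the "tap"/"drag"/"sleep" line format are simplified illustrations rather than the real orangfuzz or Orangutan code – consult those repositories for the actual script syntax and device definitions.

# Simplified illustration of generating a random user-interaction script.
# The command format and device numbers are approximations, not the real ones.
import random

class Device:
    """Hypothetical device definition: subclasses supply screen geometry."""
    width = 320
    height = 480

class TestDriverLike(Device):
    # Illustrative resolution only; use the real device specifications.
    width = 320
    height = 520

def random_script(device, steps=20, seed=None):
    rng = random.Random(seed)
    lines = []
    for _ in range(steps):
        if rng.random() < 0.7:
            x, y = rng.randrange(device.width), rng.randrange(device.height)
            lines.append("tap %d %d 1 100" % (x, y))
        else:
            x1, y1 = rng.randrange(device.width), rng.randrange(device.height)
            x2, y2 = rng.randrange(device.width), rng.randrange(device.height)
            lines.append("drag %d %d %d %d 350 10" % (x1, y1, x2, y2))
        lines.append("sleep %d" % rng.randrange(100, 1000))
    return "\n".join(lines)

if __name__ == "__main__":
    print(random_script(TestDriverLike(), steps=5, seed=42))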

Warning: It is entirely possible to generate a script that contains a set of actions that dial emergency numbers such as “911”, “112” or “999”, so it is recommended to run the script against a special build of Gaia (not yet well-tested) with dialing and messaging capabilities disabled if you want to run orangfuzz continuously without supervision.

How can you help?

At this point we are still experimenting with the most effective strategy for identifying and triaging crashes, but please feel free to file bugs or ideas on GitHub or in Bugzilla. Subscribe to the mozilla.dev.b2g newsgroup if you are interested.

Bug 858174 tracks moving orangfuzz to production.

A demonstration video on YouTube with annotations is available, or you can get the .webm version (no audio).

-Gary Kwong

* Credits go out to Gregor Wagner, who wrote generate-orangutan-script.py, and William Lachance, author of the Orangutan framework.

We’re doing a Reddit AMA!

Curtisk

Members of the Mozilla Security community will be participating in an “Ask Me Anything” (AMA) event on Reddit tomorrow, 27-March-2013. We plan to run this for 24 hours, from March 27th at 6:00 am PDT through March 28th at 6:00 am PDT.

Within Mozilla, our teams depend heavily on our community to handle everything involved in information security research and development; if you would like to learn more, please come out and ask us the questions you want answered!

You can also follow us on Twitter at https://twitter.com/mozsec

This post will be updated with the appropriate links tomorrow morning.

Update:

Link to AMA: http://www.reddit.com/r/netsec/comments/1b3vcx/we_are_the_mozilla_security_community_ask_us/