November 2017 CA Communication

Mozilla has sent a CA Communication to inform Certificate Authorities (CAs) who have root certificates included in Mozilla’s program about Mozilla’s expectations regarding version 2.5 of Mozilla’s Root Store Policy, annual CA updates, and actions the CAs need to take. This CA Communication has been emailed to the Primary Point of Contact (POC) and an email alias for each CA in Mozilla’s program, and they have been asked to respond to the following 8 action items:

  1. Review version 2.5 of Mozilla’s Root Store Policy, and update the CA’s CP/CPS documents as needed to become fully compliant.
  2. Confirm understanding that non-technically-constrained intermediate certificates must be disclosed in the Common CA Database (CCADB) within one week of creation, and of new requirements for technical constraints on intermediate certificates issuing S/MIME certificates.
  3. Confirm understanding that annual updates (audits, CP, CPS, test websites) are to be provided via Audit Cases in the CCADB.
  4. Confirm understanding that audit statements that are not in English and do not contain all of the required information will be rejected by Mozilla, and may result in the CA’s root certificate(s) being removed from our program.
  5. Perform a BR Self Assessment and send it to Mozilla. This self assessment must cover the CA hierarchies (and all of the corresponding CP/CPS documents) that chain up to the CA’s root certificates that are included in Mozilla’s root store and enabled for server authentication (Websites trust bit).
  6. Provide a tested email address for the CA’s Problem Reporting Mechanism.
  7. Follow new developments and effective dates for Certification Authority Authorization (CAA).
  8. Check issuance of certs to .tg domains between October 25 and November 11, 2017.

The full action items can be read here. Responses to the survey will be automatically and immediately published by the CCADB.

With this CA Communication, we re-iterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

Mozilla Security Team

Statement on DigiCert’s Proposed Purchase of Symantec’s CA

Mozilla’s Root Store Program has taken the position that trust is not automatically transferable between organizations. This is specifically stated in section 8 of our Root Store Policy v2.5, which details how Mozilla handles transfers of root certificates between organizations. Mozilla has taken an interest in such transfers, and there is the potential for trust adjustments based on the particular circumstances.

The CA DigiCert has announced that it is in negotiations to acquire the CA business of Symantec. This announcement was made following the decision of Mozilla and other root store programs to phase out trust in Symantec’s root certificates, based on a detailed investigation of their old and large CA hierarchies and their behaviour and practices over the past few years. There are no plans to change this phase-out of trust in the roots owned by Symantec.

While Mozilla does not intend to micro-manage any CA, the final arrangements for management, processes, and infrastructure to be used by the combined company are of interest and potential concern to us. It would not be appropriate for a CA to escape root program sanction by restructuring, or by purchasing another CA through M&A and continuing operations under that CA’s name, essentially unchanged. An examination of historical corporate merger and acquisition activity, including deals involving Symantec, shows that it’s possible for an M&A billed as the “purchase of B by A” to end up with name A and yet be mostly managed by the executives of B.

Representatives of DigiCert have sought guidance from us on the type of arrangements which would and would not cause us concern. In a good faith effort to answer that enquiry, we can make the following, non-exhaustive statements of what would cause Mozilla concern.

  • We would be concerned if the combined company continued to operate significant pieces of Symantec’s old infrastructure as part of their day-to-day issuance of publicly-trusted certificates.
  • We would be concerned if Symantec validation and operations personnel continued their roles without retraining in DigiCert methods and culture.
  • We would be concerned if Symantec processes appeared to displace DigiCert processes.
  • We would be concerned if the management of the combined company, particularly that part of it providing technical and policy direction and oversight of the PKI, were to appear as if Symantec were the controlling CA organization in the merger.

We hope that this provides useful guidance about our concerns, and note that our final opinion of the trustworthiness of the resulting entity will depend on the facts and behavior of the resulting organization. Mozilla reserves the right to include or exclude organizations or root certificates from our root store at our sole discretion. However, if the M&A activity moves forward, we hope that the list above will be helpful to DigiCert in planning for a future harmonious working relationship with the Mozilla Root Program.

Gervase Markham
Kathleen Wilson

MWoS: Improving ssh_scan Scalability and Feature Set

Editors Note: This is a guest post by Ashish Gaurav, Harsh Vardhan, and Rishabh Saxena

Maintaining a large number of servers and keeping them secure is a tough job! System administrators rely on tools like Puppet and Ansible to manage system configurations. However, they often lack the means to independently test these systems and ensure that expectations match reality.

ssh_scan was created in an effort to provide a “simple to configure and use” tool that fills this gap for system administrators and security professionals seeking to validate their ssh configurations against a predefined policy. It aims to provide control over what policies and configurations you self-identify as important.

As CS undergraduates, we had the opportunity to participate in the 2016-2017 edition of Mozilla Winter of Security (MWoS), where we volunteered to improve the scalability and feature set of ssh_scan.

The goal of the project was to improve the existing scanner to make securing your ssh servers easier. It scans ssh servers by initiating a remote unauthenticated connection, enumerates all the attributes of the service, and compares them against a user-defined policy. The Mozilla OpenSSH Security Guide was used as the sane baseline policy recommendation for SSH configuration parameters.

Early Work

Before we started working on the project, ssh_scan was a simple command-line tool. It had limited fingerprinting support and no logging capability. We started by introducing some key features to improve the CLI tool, like adding logging, making it multi-threaded, and extending its dev-ops usability. However, we really wanted to make the tool more accessible for everyone, so we decided to evolve ssh_scan into a web API. As soon as the initial CLI tool was leveled up, we moved on to architecture planning for the web API.


Since ssh_scan is written in Ruby, we looked at different Ruby web frameworks to implement the web API.  We finally settled on Sinatra, as it was a lightweight framework which gave us the power and flexibility to adapt and evolve quickly.

We started by providing a REST API around the existing command-line tool so that it could be integrated into the Mozilla Observatory as another module. Because the Observatory receives a large number of scan requests per day, we had to make our API scale well enough to keep pace with that demand if it was ever to be enabled by default.

High-level Design Overview

Our high-level design revolved around a producer/consumer model. We also tried to keep things simple and modular so that it was easy to swap out or upgrade components as needed, using HTTPS as a transport wherever possible. This flexibility was invaluable as we progressed through the project, learned where the bottlenecks were, and upgraded individual sub-components when they showed strain.

In our approach, a user makes a request to the API, which is queued in the database as a state machine so that a scan’s progress can be tracked throughout. A worker then polls the API for work, takes the work off the database queue, performs the scan, and sends the scan results back to the API server, where they are stored in the database. As a starting point, an ssh_scan_api operator can have a single worker process running on the API/DB server. As workload requirements increase and queues build up (which we can monitor through our stats API route), we simply scale workers horizontally with Docker to pick up the additional load with relative ease, without disrupting other system components.


Asynchronous job management was a totally new concept for us before we started this project.  Because of this, it took us some time to settle on the components to efficiently handle our use-cases.  Fortunately, with the help of our mentors, we settled on implementing many things from scratch to start, which gave us more detailed insight into the following:

  • How asynchronous API systems work
  • How to make it scale by identifying and removing the bottlenecks

As the end-to-end scan time depends mainly on completing the scan itself, we achieved scalability by having multiple workers perform scans in parallel. We also added authentication requirements around essential API functions to prevent abuse.

Current Status of Project

We have already integrated ssh_scan_api as a supporting sub-module of the Mozilla Observatory and it is deployed as a beta here.  However, even as a beta service, we’ve already run over 4,000 free scans for public SSH services, which is far more than we could have ever done with the single-threaded command-line version we started with.  We also expect usage to increase significantly as we raise awareness of this free tool.

Future Plans

We plan to do more performance testing of the API to continue to identify and plan for future scaling needs as demand presents itself. Outcomes of this effort might include an even more robust work management strategy and further performance-stressing of the API. The process continues to be iterative and we are solving challenges one step at a time.

Thanks Mozilla!

This project was a great opportunity to help Mozilla in building a more secure and open web and we believe we’ve done that. We’d like to give special thanks to claudijd, pwnbus and kang who supported us as mentors and helped guide us through the project.  Also, a very special thanks to April for doing all the front-end web development to add this as a submodule in the Observatory and helping make this real.

If you would like to contribute to ssh_scan or ssh_scan_api in any way, please reach out to us using GitHub issues on the following respective projects as we’d love your help:


Ashish Gaurav, Harsh Vardhan, and Rishabh Saxena

Treating data URLs as unique origins for Firefox 57

The data URL scheme provides a mechanism that allows web developers to inline small files directly in an HTML (or CSS) document. The main benefit of data URLs is that they speed up page load time, because inlining otherwise external resources reduces the number of HTTP requests a browser has to perform.

Unfortunately, criminals also utilize data URLs to craft attack pages in an attempt to gather usernames, passwords and other confidential information from innocent users. Data URLs are particularly attractive to attackers because they allow them to mount attacks without requiring them to actually host a full website. Instead, scammers embed the entire attack code within the data URL, which previously inherited the security context of the embedding element. In turn, this inheritance model opened the door for Cross-Site-Scripting (XSS) attacks.

Rather than inheriting the origin of the settings object responsible for the navigation, data URLs will be treated as unique origins for Firefox 57. In other words, data URLs loaded inside an iframe are not same-origin with their parent document anymore.

Let’s consider the following example:
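The sketch below is illustrative (the markup is hypothetical): the embedding document defines foo() on line 8, and the iframe on line 13 loads a data URL carrying its own inline script that calls parent.foo().

    <!DOCTYPE html>
    <html>
    <head>
      <meta charset="utf-8">
      <title>data URL inheritance example</title>
      <script>
        // function defined by the embedding (parent) document
        function foo() { alert("hello from the embedding document"); }
      </script>
    </head>
    <body>
      <!-- the data URL below embeds its own inline script -->
      <iframe src="data:text/html,<script>parent.foo();</script>"></iframe>
    </body>
    </html>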

In Firefox 56 and older, the script within the data URL iframe on line 13 was able to access objects from the embedding context, because the data URL inherited the security context of the embedding document and hence was considered same-origin. In this specific example, the script within the data URL iframe could call the function foo() on line 8, even though foo() was defined by the including context, which should be treated as a different security context.

Starting with Firefox 57, data URLs loaded inside an iframe will be considered cross-origin. Not only will that behavior mitigate the risk of XSS, it will also make Firefox standards compliant and consistent with the behavior of other browsers. In Firefox 57, an attempt to reach content from a different origin (like the one from line 13) will be blocked and an error message will be logged to the console.

Note that data URLs that do not end up creating a scripting environment, such as those found in img elements, will still be considered same-origin.

For the Mozilla Security Team:
Christoph Kerschbaumer, Ethan Tseng, Henry Chang & Yoshi Huang

Improving AES-GCM Performance

AES-GCM is a NIST-standardised authenticated encryption algorithm (NIST SP 800-38D). Since its standardisation in 2008, its usage has increased to the point where it is the prevalent encryption used with TLS. At 88%, it is by far the most widely used TLS cipher in Firefox.

Firefox telemetry on symmetric ciphers in TLS

Unfortunately, until now the AES-GCM implementation used in Firefox (provided by NSS) did not take advantage of full hardware acceleration on all platforms; it used a slower software-only implementation on Mac, on 32-bit Linux, and on any device that doesn’t have all of the AVX, PCLMUL, and AES-NI hardware instructions. Based on hardware telemetry information, only 30% of Firefox 55 users get full hardware acceleration (and the resulting resistance to side-channel analysis). In this post I describe how I made AES-GCM in NSS, and thus Firefox 56, significantly faster, more side-channel resistant, and more energy efficient on most platforms with hardware support.

To evaluate the actual impact on Firefox users, I tested the practical speed of our encryption by downloading a large file from a secure site using various hardware configurations. Downloading a file on a mid-2015 MacBook Pro Retina with Firefox 55 spends 17% of its CPU time in ssl3_AESGCM, the routine that performs the decryption. On a Windows laptop with an AMD C-70 (which lacks the AES-NI instructions), Firefox CPU usage is 60% and the download speed is capped at 3.5MB/s. This is not just an academic issue: particularly for battery-operated devices, the difference in energy consumption is noticeable.

Improving GCM performance

Speeding up the GCM multiplication function is the first obvious step to improve AES-GCM performance. A bug was opened when the original AES-GCM code was integrated, asking for an alternative to the textbook implementation of gcm_HashMult. That code is not only slow but also has timing side channels, as you can see in the following excerpt from the binary multiplication algorithm:

    for (ib = 1; ib < b_used; ib++) {
      b_i = *pb++;

      /* Inner product:  Digits of a */
      if (b_i) /* data-dependent branch: leaks whether b_i is zero */
        s_bmul_d_add(MP_DIGITS(a), a_used, b_i, MP_DIGITS(c) + ib);
      else
        MP_DIGIT(c, ib + a_used) = b_i;
    }

We can improve on two fronts here. First, NSS should use the PCLMUL hardware instruction to speed up the ghash multiplication whenever possible. Second, if PCLMUL is not available, NSS should use a fast constant-time software implementation.

Bug 868948 contains several attempts at speeding up the software implementation without introducing timing side channels. Unfortunately, the fastest code proposed uses table lookups and is therefore not constant-time (accessing memory locations within the same cache line still leaks timing information). Thanks to Thomas Pornin, I re-implemented the binary multiplication in a way that doesn’t leak any timing information and is still faster than any other proposed C code (see Bug 868948 or openssl/boringssl for other software implementations). Check out Thomas’ excellent write-up for details.
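The core trick behind that constant-time approach can be sketched as follows (an illustrative 32×32-bit carry-less multiply, not the actual NSS function): each operand is split into four “holed” words so that ordinary integer multiplications never carry into the bits we keep, which removes both the data-dependent branch and the table lookups.

    #include <stdint.h>

    /* Sketch: constant-time carry-less multiplication in GF(2)[x].
     * Returns the low 32 bits of the product of x and y.
     * No branches, no table lookups; only masks and integer multiplies. */
    static uint32_t
    bmul32(uint32_t x, uint32_t y)
    {
        uint32_t x0 = x & 0x11111111, x1 = x & 0x22222222,
                 x2 = x & 0x44444444, x3 = x & 0x88888888;
        uint32_t y0 = y & 0x11111111, y1 = y & 0x22222222,
                 y2 = y & 0x44444444, y3 = y & 0x88888888;
        /* Each kept bit receives at most 8 partial products, so carries
         * never reach the next kept bit four positions away. */
        uint32_t z0 = (x0 * y0) ^ (x1 * y3) ^ (x2 * y2) ^ (x3 * y1);
        uint32_t z1 = (x0 * y1) ^ (x1 * y0) ^ (x2 * y3) ^ (x3 * y2);
        uint32_t z2 = (x0 * y2) ^ (x1 * y1) ^ (x2 * y0) ^ (x3 * y3);
        uint32_t z3 = (x0 * y3) ^ (x1 * y2) ^ (x2 * y1) ^ (x3 * y0);
        return (z0 & 0x11111111) | (z1 & 0x22222222) |
               (z2 & 0x44444444) | (z3 & 0x88888888);
    }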

If PCLMUL is available on the CPU, using it is the way to go. All modern compilers support intrinsics, which allow us to write “inline assembly” in C that runs on all platforms without having to write assembly code files. A hardware-accelerated ghash multiplication can easily be written with _mm_clmulepi64_si128.
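A sketch of what that looks like (illustrative code, not the NSS implementation): four PCLMULQDQ invocations compute the full 128×128-bit carry-less product that ghash needs; the reduction modulo the GCM polynomial x^128 + x^7 + x^2 + x + 1 is omitted here.

    #include <emmintrin.h> /* SSE2 */
    #include <wmmintrin.h> /* PCLMUL intrinsics; build with -mpclmul on gcc/clang */

    /* Sketch: 128x128 -> 256-bit carry-less multiplication. */
    static void
    clmul128(__m128i a, __m128i b, __m128i *lo, __m128i *hi)
    {
        __m128i t0 = _mm_clmulepi64_si128(a, b, 0x00); /* a_lo * b_lo */
        __m128i t1 = _mm_clmulepi64_si128(a, b, 0x10); /* a_lo * b_hi */
        __m128i t2 = _mm_clmulepi64_si128(a, b, 0x01); /* a_hi * b_lo */
        __m128i t3 = _mm_clmulepi64_si128(a, b, 0x11); /* a_hi * b_hi */
        __m128i mid = _mm_xor_si128(t1, t2);
        *lo = _mm_xor_si128(t0, _mm_slli_si128(mid, 8)); /* low 128 bits  */
        *hi = _mm_xor_si128(t3, _mm_srli_si128(mid, 8)); /* high 128 bits */
    }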

On Mac and Linux the new 32-bit and 64-bit software ghash functions (faster and constant-time) are used on the respective platforms if PCLMUL or AVX is not available. Since Windows doesn’t support 128-bit integers (outside of registers) NSS falls back to the slower 32-bit ghash code – which is still more than 25% faster than the previous ghash implementation.

Improving AES performance

To speed up AES, NSS needs hardware acceleration on Mac as well as on 32-bit Linux and on any machine that doesn’t support AVX (or has it disabled). When NSS can’t use the specialised AES code, it falls back to a table-based implementation that is again not constant-time (in addition to being slow). There are currently no plans to rewrite the existing fallback code, as AES is very hard to implement in software both efficiently and without side channels. Implementing AES with intrinsics, on the other hand, is a breeze.

    m = _mm_xor_si128(m, cx->keySchedule[0]);      /* initial AddRoundKey */
    for (i = 1; i < cx->Nr; ++i) {
      m = _mm_aesenc_si128(m, cx->keySchedule[i]);  /* Nr-1 full rounds */
    }
    m = _mm_aesenclast_si128(m, cx->keySchedule[cx->Nr]); /* final round */
    _mm_storeu_si128((__m128i *)output, m);

Key expansion is a little bit more involved (for 192 and 256 bit), but is written in about 100 lines as well.
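For the 128-bit case, the per-round step can be sketched like this (illustrative code, not the NSS implementation). _mm_aeskeygenassist_si128 needs its round constant as a compile-time immediate, hence the macro:

    #include <emmintrin.h>
    #include <wmmintrin.h> /* AES-NI intrinsics */

    /* Sketch of one AES-128 key-expansion round. `assist` is
     * _mm_aeskeygenassist_si128(key, rcon); its top dword holds
     * RotWord(SubWord(w3)) ^ rcon, which is broadcast and folded into
     * the running prefix-XOR of the previous round key. */
    static __m128i
    expand128_step(__m128i key, __m128i assist)
    {
        assist = _mm_shuffle_epi32(assist, _MM_SHUFFLE(3, 3, 3, 3));
        key = _mm_xor_si128(key, _mm_slli_si128(key, 4));
        key = _mm_xor_si128(key, _mm_slli_si128(key, 4));
        key = _mm_xor_si128(key, _mm_slli_si128(key, 4));
        return _mm_xor_si128(key, assist);
    }

    /* The round constant must be an immediate: */
    #define EXPAND128_ROUND(k, rcon) \
        expand128_step((k), _mm_aeskeygenassist_si128((k), (rcon)))

    /* e.g. schedule[1] = EXPAND128_ROUND(schedule[0], 0x01);
     *      schedule[2] = EXPAND128_ROUND(schedule[1], 0x02); ... */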

Mac sees the biggest improvement here. Previously, only Windows and 64-bit Linux used AES-NI, and now all desktop x86 and x64 platforms use it when available.

Looking at the numbers

To measure the performance gain of the new AES-GCM code, I encrypted a 479MB file with a 128-bit key (the most widely used key size for AES-GCM). Note that these numbers are intended to show a trend and depend heavily on the machine used and the system load at the time.

Linux measurements were done on an Intel Core i7-4790, Windows measurements on a Surface Pro 2 with an Intel Core i5-4300U, and Mac measurements on a mid-2015 machine with an Intel Core i7-4980HQ. For all of the following graphs, lower is better.

Linux 64 AES-GCM 128 encryption performance improvements

Linux 32 AES-GCM 128 encryption performance improvements

AES-GCM 128 on a 64-bit Linux machine without hardware support for the AES, PCLMUL, or AVX instructions is now at least twice as fast. If the AES and PCLMUL instructions are available, the new code needs only 33% of the time the old code took.

The speed-up for 32-bit Linux is more significant as it didn’t previously have any hardware accelerated code. With full hardware acceleration the new code is more than 5 times faster than before. Even in the worst case – when PCLMUL is not available – the speedup is still more than 50%.

The story is similar on Windows, although NSS already had fast code for 32-bit Windows users.

Windows 64 AES-GCM 128 encryption performance improvements


Windows 32 AES-GCM 128 encryption performance improvements


Performance improvements on Mac (64-bit only) range from 60% in the best case to 44% when AES-NI or PCLMUL is not available.

Mac OSX AES-GCM 128 encryption performance improvements

The numbers in Firefox

NSS 3.32 (Firefox 56) ships with the new accelerated AES-GCM code. It provides significantly reduced CPU usage for most TLS connections, or higher download rates, meaning better energy efficiency too. NSS 3.32 is also more intelligent about detecting the CPU’s capabilities and using hardware acceleration whenever possible. Assuming that all intrinsics and mathematical operations (other than division) are constant-time on the CPU, the new code doesn’t have any timing side channels.

On the very basic laptop with the AMD C-70, download rates increased from ~3MB/s to ~6MB/s, even though this device has no hardware acceleration support.

To see the performance improvement, we can look at the case where AVX is not available (which is the case for about two thirds of the Firefox population). Assuming that at least AES-NI and PCLMUL are supported by the CPU, CPU usage drops from 15% to 3%.

AES_Decrypt CPU usage with NSS 3.31 without AVX hardware support

AES_Decrypt CPU usage with NSS 3.32 without AVX hardware support

The most immediate effect can be seen on Mac. AES_Decrypt in NSS 3.31 used 9% CPU, while in NSS 3.32 it uses only 4%.

AES_Decrypt CPU usage with NSS 3.31 on Mac OSX

AES_Decrypt CPU usage with NSS 3.32 on Mac OSX

The most significant performance improvements are summarised in the following table, which shows the time in seconds to decrypt a ~500MB file with AES-GCM 128; lower is better.

                       Linux 32-bit   Mac    No AVX support
NSS 3.31 (Firefox 55)  20.3           11.5   21.3
NSS 3.32 (Firefox 56)  3.4            4.6    3.5

These improvements to AES-GCM in NSS make Firefox 56 significantly faster, more side-channel resistant, and more energy efficient on most platforms using hardware support.

Verified cryptography for Firefox 57

Traditionally, software is produced in this way: write some code, maybe do some code review, run unit tests, and then hope it is correct. Experience shows that it is very hard for programmers to write bug-free software. These bugs are sometimes caught in manual testing, but many are still exposed to users and then must be fixed in patches or subsequent versions. This works for most software, but it’s not a great way to write cryptographic software; users expect and deserve assurances that the code providing security and privacy is well written and bug free.

Even innocuous-looking bugs in cryptographic primitives can break the security properties of the overall system and threaten user security. Unfortunately, such bugs aren’t uncommon. In just the last year, popular cryptographic libraries have issued dozens of CVEs for bugs in their core cryptographic primitives or for incorrect use of those primitives. These bugs include many memory safety errors, some side-channel leaks, and a few correctness errors, for example in bignum arithmetic computations. So what can we do?

Fortunately, recent advances in formal verification allow us to significantly improve the situation by building high assurance implementations of cryptographic algorithms. These implementations are still written by hand, but they can be automatically analyzed at compile time to ensure that they are free of broad classes of bugs. The result is that we can have much higher confidence that our implementation is correct and that it respects secure programming rules that would usually be very difficult to enforce by hand.

This is a very exciting development, and Mozilla has partnered with INRIA and Project Everest (Microsoft Research, CMU, INRIA) to bring components from their formally verified HACL* cryptographic library into NSS, the security engine which powers Firefox. We believe that we are the first major Web browser to have formally verified cryptographic primitives.

The first result of this collaboration, an implementation of the Curve25519 key establishment algorithm (RFC 7748), has just landed in Firefox Nightly. Curve25519 is widely used for key exchange in TLS and was recently standardized by the IETF. As an additional bonus, besides being formally verified, the HACL* Curve25519 implementation is also almost 20% faster on 64-bit platforms than the existing NSS implementation (19,500 scalar multiplications per second instead of 15,100), which represents an improvement in both security and performance for our users. We expect to ship this new code as part of our November Firefox 57 release.

Over the next few months, we will be working to incorporate other HACL* algorithms into NSS, and will also have more to say about the details of how the HACL* verification works and how it gets integrated into NSS.

Benjamin Beurdouche, Franziskus Kiefer & Tim Taubert

Mozilla Releases Version 2.5 of Root Store Policy

Recently, Mozilla released version 2.5 of our Root Store Policy, which continues our efforts to improve standards and reinforce public trust in the security of the Web. We are grateful to all those in the security and Certificate Authority (CA) communities who contributed constructively to the discussions surrounding the new provisions.

The changes of greatest note in version 2.5 of our Root Store Policy are as follows:

  • CAs are required to follow industry best practice for securing their networks, for example by conforming to the CA/Browser Forum’s Network Security Guidelines or a successor document.
  • CAs are required to use only those methods of domain ownership validation which are specifically documented in the CA/Browser Forum’s Baseline Requirements version 1.4.1.
  • Additional requirements were added for intermediate certificates that are used to sign certificates for S/MIME. In particular, such intermediate certificates must be name constrained in order to be considered technically-constrained and exempt from being audited and disclosed on the Common CA Database.
  • Clarified that point-in-time audit statements do not replace the required period-of-time assessments. Mozilla continues to require full-surveillance period-of-time audits that must be conducted annually, and successive audit periods must be contiguous.
  • Clarified the information that must be provided in each audit statement, including the distinguished name and SHA-256 fingerprint for each root and intermediate certificate in scope of the audit.
  • CAs are required to follow and be aware of discussions in the forum, where Mozilla’s root program is coordinated, although they are not required to participate.
  • CAs are required at all times to operate in accordance with the applicable Certificate Policy (CP) and Certificate Practice Statement (CPS) documents, which must be reviewed and updated at least once every year.
  • Our policy on root certificates being transferred from one organization or location to another has been updated and included in the main policy. Trust is not transferable; Mozilla will not automatically trust the purchaser of a root certificate to the level it trusted the previous owner.

The differences between versions 2.5 and 2.4.1 may be viewed on Github. (Version 2.4.1 contained exactly the same normative requirements as version 2.4 but was completely reorganized.)

As always, we re-iterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

Mozilla Security Team

Removing Disabled WoSign and StartCom Certificates from Firefox 58

In October 2016, Mozilla announced that, as of Firefox 51, we would stop validating new certificates chaining to the root certificates listed below that are owned by the companies WoSign and StartCom.

The announcement also indicated our intent to eventually completely remove these root certificates from Mozilla’s Root Store, so that we would no longer validate any certificates issued by those roots. That time has now arrived. We plan to release the relevant changes to Network Security Services (NSS) in November, and then the changes will be picked up in Firefox 58, due for release in January 2018. Websites using certificates chaining up to any of the following root certificates need to migrate to another root certificate.

This announcement applies to the root certificates with the following names:

  • CA 沃通根证书
  • Certification Authority of WoSign
  • Certification Authority of WoSign G2
  • CA WoSign ECC Root
  • StartCom Certification Authority
  • StartCom Certification Authority G2

Mozilla Security Team

A Security Audit of Firefox Accounts

To provide transparency into our ongoing efforts to protect your privacy and security on the Internet, we are releasing a security audit of Firefox Accounts (FxA) that Cure53 conducted last fall. At Mozilla, we sponsor security audits of core open source software underpinning the Web and Internet, recently relaunched our web bug bounty program, find and fix vulnerabilities ourselves, and open source our code for anyone to review. Despite being available to more reviewers, open source software is not necessarily reviewed more thoroughly or frequently than closed source software, and the extra attention from third party reviewers can find outstanding issues and vulnerabilities. To augment our other initiatives and improve the overall security of our web services, we engage third party organizations to audit the security and review the code of specific services.

As Firefox’s central authentication service, FxA is a natural first target. Its security is critical to millions of users who rely on it to authenticate with our most sensitive services, such as Sync. Cure53 ran a comprehensive security audit that encompassed the web services powering FxA and the cryptographic protocol used to protect user accounts and data. They identified 15 issues, none of which were exploited or put user data at risk.

We thank Cure53 for reviewing FxA and increasing our trust in the backbone of Firefox’s identity system. The audit is a step toward providing higher quality and more secure services to our users, which we will continue to improve through our various security initiatives. In the rest of this blog post, we discuss the technical details of the four highest severity issues. The report is available here and you can sign up or log into Firefox Accounts on your desktop or mobile device at:


FXA-01-001 HTML injection via unsanitized FxA relier Name

The one issue Cure53 ranked as critical, FXA-01-001 HTML injection via unsanitized FxA relier Name, resulted from displaying the name of a relier without HTML escaping on the relier registration page. This issue was not exploitable from outside Mozilla, because the endpoint for registering new reliers is not open to the public. A strict Content Security Policy (CSP) blocked most Cross-Site-Scripting (XSS) on the page, but an attacker could still exfiltrate sensitive authentication data via scriptless attacks and deface or repurpose the page for phishing. We fixed the vulnerability soon after Cure53 reported it to us by updating the template language to escape all variables and by adopting an explicit naming convention for unescaped variables. Third party relier names are now sanitized and escaped.

FXA-01-004 XSS via unsanitized Output on JSON Endpoints

The first of three issues ranked high, FXA-01-004 XSS via unsanitized Output on JSON Endpoints, affected legacy browsers handling JSON endpoints with user-controlled fields at the beginning of the response. For responses like the following:

        "id": "81730c8682f1efa5",
        "name": "<img src=x onerror=alert(1)>",
        "trusted": false,
        "image_uri": "",
        "redirect_uri": "javascript:alert(1)"

an attacker could set the name or redirect_uri such that legacy browsers sniff the initial bytes of a response, incorrectly guess the MIME type as HTML instead of JSON, and execute user defined scripts.  We added the HTTP header X-Content-Type-Options: nosniff (XCTO) to disable MIME type sniffing, and wrote middleware and patches for the web frameworks to unicode escape <, >, and & characters in JSON responses.
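As an illustration (the exact serialization produced by the FxA middleware may differ), the name field from the response above would now be emitted with its markup characters unicode-escaped, so that even a browser that sniffs the body as HTML finds no executable markup:

        "name": "\u003cimg src=x onerror=alert(1)\u003e"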

FXA-01-014 Weak client-side Key Stretching

The second issue with a high severity ranking, FXA-01-014 Weak client-side Key Stretching, is “a tradeoff between security and efficiency”. The onepw protocol threat model includes an adversary capable of breaking or bypassing TLS. Consequently, we run 1,000 iterations of PBKDF2 on user devices to avoid sending passwords directly to the server, which runs a further 2^16 (65,536) scrypt iterations on the PBKDF2-stretched password before storing it. Cure53 recommended storing PBKDF2 passwords with a higher work factor of roughly 256,000 iterations, but concluded that “an exact recommendation on the number of iterations cannot be supplied in this instance”. To keep performance acceptable on less powerful devices, we have not increased the work factor yet.

FXA-01-010 Possible RCE if Application is run in a malicious Path

The final high severity issue, FXA-01-010 Possible RCE if Application is run in a malicious Path, affected people running FxA web servers from insecure paths in development mode. In development mode, the servers exposed an endpoint that executes shell commands to determine the release version and git commit they’re running. For example, the command below returns the current git commit:

var gitDir = path.resolve(__dirname, '..', '..', '.git')
var cmd = util.format('git --git-dir=%s rev-parse HEAD', gitDir)
exec(cmd, …)

Cure53 noted that malicious commands like rm -rf * embedded in the directory path (the __dirname global) would be executed, and recommended filtering and quoting the parameters. We modified the script to pass the git directory out of band instead of interpolating it into the command, so the parameter no longer needs to be filtered:

var cmd = 'git rev-parse HEAD'
exec(cmd, { env: { GIT_CONFIG: gitDir } } ...)

Mozilla does not run servers from insecure paths, but some users host their own FxA services and it is always good to consider malicious input from all sources.


We reviewed the higher ranked issues from the report, the circumstances limiting their impact, and how we addressed them. We invite you to contribute to developing Firefox Accounts and to report security issues through our bug bounty program as we continue to improve the security of Firefox Accounts and other core services.

Analysis of the Alexa Top 1M sites

Prior to the release of the Mozilla Observatory a year ago, I ran a scan of the Alexa Top 1M websites. Despite being available for years, the usage rates of modern defensive security technologies were frustratingly low. A lack of tooling, combined with poor and scattered documentation, had led to little awareness of countermeasures such as Content Security Policy (CSP), HTTP Strict Transport Security (HSTS), and Subresource Integrity (SRI).

A few months after the Observatory’s release — and 1.5M Observatory scans later — I reassessed the Top 1M websites. The situation appeared to be improving, with the use of HSTS and CSP up by approximately 50%. But were those improvements simply low-hanging fruit, or did the situation continue to improve over the following months?

Technology April 2016 October 2016 June 2017 % Change
Content Security Policy (CSP) .005%1
Cookies (Secure/HttpOnly)3 3.76% 4.88% 6.50% +33%
Cross-origin Resource Sharing (CORS)4 93.78% 96.21% 96.55% +.4%
HTTPS 29.64% 33.57% 45.80% +36%
HTTP → HTTPS Redirection 5.06%5
Public Key Pinning (HPKP) 0.43% 0.50% 0.71% +42%
 — HPKP Preloaded7 0.41% 0.47% 0.43% -9%
Strict Transport Security (HSTS)8 1.75% 2.59% 4.37% +69%
 — HSTS Preloaded7 .158% .231% .337% +46%
Subresource Integrity (SRI) 0.015%9 0.052%10 0.113%10 +117%
X-Content-Type-Options (XCTO) 6.19% 7.22% 9.41% +30%
X-Frame-Options (XFO)11 6.83% 8.78% 10.98% +25%
X-XSS-Protection (XXSSP)12 5.03% 6.33% 8.12% +28%

The pace of improvement across the web appears to be continuing at an astounding rate. Although a 36% increase in the number of sites that support HTTPS might seem small, the absolute numbers are quite large — it represents over 119,000 websites.

Not only that, but 93,000 of those websites have chosen to be HTTPS by default, with 18,000 of them forbidding any HTTP access at all through the use of HTTP Strict Transport Security.

The sharp jump in the rate of Content Security Policy (CSP) usage is similarly surprising. CSP can be difficult to implement for a new website, and often requires extensive rearchitecting to retrofit onto an existing site, which is what most of the Alexa Top 1M sites are. Between steadily improving documentation, advances in CSP3 such as ‘strict-dynamic’, and CSP policy generators such as the Mozilla Laboratory, it appears that we might be turning a corner on CSP usage around the web.

Observatory Grading

Despite this progress, the vast majority of large websites around the web continue to not use Content Security Policy and Subresource Integrity. As these technologies — when properly used — can nearly eliminate huge classes of attacks against sites and their users, they are given a significant amount of weight in Observatory scans.

As a result of these low usage rates, established websites typically receive failing grades from the Observatory. Nevertheless, I continue to see improvements across the board:

Grade April 2016 October 2016 June 2017 % Change
 A+ .003% .008% .013% +62%
A .006% .012% .029% +142%
B .202% .347% .622% +79%
C .321% .727% 1.38% +90%
D 1.87% 2.82% 4.51% +60%
F 97.60% 96.09% 93.45% -2.8%

As 969,924 scans were successfully completed in the last survey, a decrease in failing grades by 2.8% implies that over 27,000 of the largest sites in the world have improved from a failing grade in the last eight months alone.

In fact, my research indicates that over 50,000 websites around the web have directly used the Mozilla Observatory to improve their grades, indicated by scanning their website, making an improvement, and then scanning their website again. Of these 50,000 websites, over 2,500 have improved all the way from a failing grade to an A or A+ grade.

When I first built the Observatory a year ago at Mozilla, I had never imagined that it would see such widespread use. 3.8M scans across 1.55M unique domains later, it seems to have made a significant difference across the internet. I feel incredibly lucky to work at a company like Mozilla that has provided me with a unique opportunity to work on a tool designed solely to make the internet a better place.

Please share the Mozilla Observatory and the Web Security Guidelines so that the web can continue to see improvements over the years to come!



  1. Allows 'unsafe-inline' in neither script-src nor style-src
  2. Allows 'unsafe-inline' in style-src only
  3. Amongst sites that set cookies
  4. Disallows foreign origins from reading the domain’s contents within user’s context
  5. Redirects from HTTP to HTTPS on the same domain, which allows HSTS to be set
  6. Redirects from HTTP to HTTPS, regardless of the final domain
  7. As listed in the Chromium preload list
  8. max-age set to at least six months
  9. Percentage is of sites that load scripts from a foreign origin
  10. Percentage is of sites that load scripts
  11. CSP frame-ancestors directive is allowed in lieu of an XFO header
  12. Strong CSP policy forbidding 'unsafe-inline' is allowed in lieu of an XXSSP header