Delaying Further Symantec TLS Certificate Distrust

Due to a long list of documented issues, Mozilla previously announced our intent to distrust TLS certificates issued by the Symantec Certification Authority, which is now a part of DigiCert. On August 13th, the next phase of distrust was enabled in Firefox Nightly. In this phase, all TLS certificates issued by Symantec (including their GeoTrust, RapidSSL, and Thawte brands) are no longer trusted by Firefox (with a few small exceptions).

In my previous update, I pointed out that many popular sites are still using these certificates. They are apparently unaware of the planned distrust despite DigiCert’s outreach, or are waiting until the release date that was communicated in the consensus plan to finally replace their Symantec certificates. While the situation has been improving steadily, our latest data shows well over 1% of the top 1-million websites are still using a Symantec certificate that will be distrusted.

Unfortunately, because so many sites have not yet taken action, moving this change from Firefox 63 Nightly into Beta would impact a significant number of our users. It is unfortunate that so many website operators have waited to update their certificates, especially given that DigiCert is providing replacements for free.

We prioritize the safety of our users and recognize the additional risk caused by a delay in the implementation of the distrust plan. However, given the current situation, we believe that delaying the release of this change until later this year when more sites have replaced their Symantec TLS certificates is in the overall best interest of our users. This change will remain enabled in Nightly, and we plan to enable it in Firefox 64 Beta when it ships in mid-October.

We continue to strongly encourage website operators to replace Symantec TLS certificates immediately. Doing so improves the security of their websites and allows the tens of thousands of Firefox Nightly users to access them.

Trusting the delivery of Firefox Updates

Providing a web browser that you can depend on year after year is one of the core tenets of the Firefox security strategy. We put a lot of time and energy into making sure that the software you run has not been tampered with while being delivered to you.

In an effort to increase trust in Firefox, we regularly partner with external firms to verify the security of our products. Earlier this year, we hired X41 D-Sec GmbH to audit the mechanism by which Firefox ships updates, known internally as AUS, short for Application Update Service. Today, we are releasing their report.

Four researchers spent a total of 27 days running a technical security review of both the backend service that manages updates (Balrog) and the client code that updates your browser. The scope of the audit included a cryptographic review of the update signing protocol, fuzzing of the client code, pentesting of the backend and manual code review of all components.

Mozilla Security continuously reviews and tests the security of Firefox, but external verification is a critical part of our operations security strategy. We are glad to say that X41 did not find any critical flaw in AUS, but they did find various issues ranging from low to high severity, as well as 21 side findings.

X41 D-Sec GmbH found the security level of AUS to be good. No critical vulnerabilities were identified in any of the components. The most serious vulnerability discovered was a Cross-Site Request Forgery (CSRF) flaw in the administration web application interface that might allow attackers to trigger unintended administrative actions under certain conditions. Other vulnerabilities identified were memory corruption issues, insecure handling of untrusted data, and stability issues (Denial of Service (DoS)). Most of these issues were constrained by the need to first bypass cryptographic signatures.

Three vulnerabilities ranked as high, and all of them were located in the administration console of Balrog, the backend service of Firefox AUS, which is protected behind multiple factors of authentication inside our internal network. The extra layers of security effectively lower the risk of the vulnerabilities found by X41, but we fixed the issues they found regardless.
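For readers unfamiliar with this bug class, the standard mitigation for CSRF is to bind a secret token to the user's session and reject any state-changing request that does not echo that token back. The following is a minimal, hypothetical sketch of that defense; all names are illustrative and are not taken from Balrog's actual code:

```python
import hashlib
import hmac
import secrets

# Per-process secret; a real deployment would persist and rotate this.
SERVER_SECRET = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    # Bind the token to the session so it cannot be replayed
    # against a different user's session.
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, token: str) -> bool:
    expected = issue_csrf_token(session_id)
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(expected, token)
```

The server embeds the issued token in every form it renders, and any POST request arriving without a token that verifies for the current session is rejected.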

X41 found a handful of bugs in the C code that handles update files. Thankfully, the cryptographic signatures prevent a bad actor from crafting an update file that could impact Firefox. Here again, designing our systems with multiple layers of security has proven useful.

Today, we are making the full report accessible to everyone in an effort to keep Firefox open and transparent. We are also opening up our bug tracker so you can follow our progress in mitigating the issues and side findings identified in the report.

Finally, we’d like to thank X41 for their high-quality work in conducting this security audit. And, as always, we invite you to help us keep Firefox secure by reporting issues through our bug bounty program.

Supporting Referrer Policy for CSS in Firefox 64

The HTTP Referrer Value

Navigating from one webpage to another, or requesting a sub-resource within a webpage, causes a web browser to send the top-level URL in the HTTP referrer field. Inspecting that header field on the receiving end allows sites to identify where a request originated and to log referrer data for operational and statistical purposes. As one can imagine, the top-level URL quite often includes sensitive user information, which can then leak through the referrer value and impact an end user's privacy.

The Referrer Policy

To compensate, the HTTP Referrer Policy allows webpages to gain more control over referrer values on their site. For example, a Referrer Policy of “origin” instructs the web browser to strip any path information and fill the HTTP referrer field with only the origin of the requesting webpage instead of the entire URL. More aggressively, a Referrer Policy of “no-referrer” advises the browser to suppress the referrer value entirely. Ultimately, the Referrer Policy gives website authors more control over the referrer value that is sent, and hence provides a tool for respecting an end user's privacy.
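As a rough illustration of the policies described above, the following sketch models how a browser derives the referrer value under three policy settings. The function and policy subset are illustrative, not an exhaustive implementation of the Referrer Policy specification:

```python
from typing import Optional
from urllib.parse import urlsplit

def referrer_value(request_url: str, policy: str) -> Optional[str]:
    """Model the Referer header a browser would emit for a request
    originating from request_url, for three standard policy values."""
    if policy == "no-referrer":
        return None  # suppress the header entirely
    parts = urlsplit(request_url)
    origin = f"{parts.scheme}://{parts.netloc}"
    if policy == "origin":
        # Strip the path and query: only the origin is revealed.
        return origin + "/"
    # "unsafe-url" sends the full URL minus any fragment or credentials.
    return origin + parts.path + ("?" + parts.query if parts.query else "")
```

For instance, with a policy of “origin”, a request from `https://example.com/account?id=7` would carry only `https://example.com/` as its referrer.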

Expanding the Referrer Policy to CSS

While Firefox has supported Referrer Policy since Firefox 50, we are happy to announce that Firefox will expand policy coverage to style sheets starting in Firefox 64. With that update, requests originating from within style sheets will also respect a site's Referrer Policy, contributing another cornerstone to a more privacy-respecting internet.

For the Mozilla Security and Privacy Team,
  Christoph Kerschbaumer & Thomas Nguyen

September 2018 CA Communication

Mozilla has sent a CA Communication to inform Certification Authorities (CAs) who have root certificates included in Mozilla’s program about current events relevant to their membership in our program and to remind them of upcoming deadlines. This CA Communication has been emailed to the Primary Point of Contact (POC) and an email alias for each CA in Mozilla’s program, and they have been asked to respond to the following 7 action items:

  1. Mozilla recently published version 2.6.1 of our Root Store Policy. The first action confirms that CAs have read the new version of the policy.
  2. The second action asks CAs to ensure that their CP/CPS complies with the changes that were made to domain validation requirements in version 2.6.1 of Mozilla’s Root Store Policy.
  3. CAs must confirm that they will comply with the new requirement for intermediate certificates issued after January 1, 2019 to be constrained to prevent use of the same intermediate certificate to issue both SSL and S/MIME certificates.
  4. CAs are reminded in action 4 that Mozilla is now rejecting audit reports that do not comply with section 3.1.4 of Mozilla’s Root Store Policy.
  5. CAs must confirm that they have complied with the August 1, 2018 deadline to discontinue use of BR domain validation methods 1 (“Validating the Applicant as a Domain Contact”) and 5 (“Domain Authorization Document”).
  6. CAs are reminded of their obligation to add new intermediate CA certificates to CCADB within one week of certificate creation, and before any such subordinate CA is allowed to issue certificates. Later this year, Mozilla plans to begin preloading the certificate database shipped with Firefox with intermediate certificates disclosed in the CCADB, as an alternative to “AIA chasing”. This is intended to reduce the incidence of “unknown issuer” errors caused by server operators neglecting to include intermediate certificates in their configurations.
  7. In action 7 we are gathering information about the Certificate Transparency (CT) logging practices of CAs. Later this year, Mozilla is planning to use CT logging data to begin testing a new certificate validation mechanism called CRLite which may reduce bandwidth requirements for CAs and increase performance of websites. Note that CRLite does not replace OneCRL which is a revocation list controlled by Mozilla.

The full action items can be read here. Responses to the survey will be automatically and immediately published by the CCADB.

With this CA Communication, we reiterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

Protecting Mozilla’s GitHub Repositories from Malicious Modification

At Mozilla, we’ve been working to ensure our repositories hosted on GitHub are protected from malicious modification. As the recent Gentoo incident demonstrated, such attacks are possible.

Mozilla’s original usage of GitHub was an alternative way to provide access to our source code. Similar to Gentoo, the “source of truth” repositories were maintained on our own infrastructure. While we still utilize our own infrastructure for much of the Firefox browser code, Mozilla has many projects which exist only on GitHub. While some of those projects are just experiments, others are used in production (e.g. Firefox Accounts). We need to protect such “sensitive repositories” against malicious modification, while also keeping the barrier to contribution as low as practical.

This post describes the mitigations we have put in place to prevent shipping (or deploying) from a compromised repository. We are sharing both our findings and some tooling to support auditing. These measures add protection with minimal disruption to common GitHub workflows.

The risk we are addressing here is the compromise of a GitHub user’s account, via mechanisms unique to GitHub. As the Gentoo and other incidents show, when a user account is compromised, any resource the user has permissions to can be affected.


GitHub is a wonderful ecosystem with many extensions, or “apps”, that make certain workflows easier. Apps obtain permission from a user to perform actions on their behalf. An app can ask for permissions that include modifying or adding additional user credentials. GitHub makes these permission requests transparent and requires the user to approve them via the web interface, but not all users understand the implications of granting those permissions to an app. They also may not realize that approving such permissions for their personal repositories can grant the same access to any repository across GitHub where they can make changes.

Excessive permissions can expose repositories with sensitive information to risks, without the repository admins being aware of those risks. The best a repository admin can do is detect a fraudulent modification after it has been pushed back to GitHub. Neither GitHub nor git can be configured to prevent or highlight this sort of malicious modification; external monitoring is required.


The following recommendations are taken from our approach to addressing this concern, with Mozilla specifics removed. As much as possible, we borrowed from the web’s best practices, used features of the GitHub platform, and tried to avoid adding friction to daily developer workflows.

Organization recommendations:

  • 2FA must be required for all members and collaborators.
  • All users, or at least those with elevated permissions:
    • Should have contact methods (email, IM) given to the org owners or repo admins. (GitHub allows Users to hide their contact info for privacy.)
    • Should understand it is their responsibility to inform the org owners or repo admins if they ever suspect their account has been compromised. (E.g. laptop stolen)

Repository recommendations:

  • Sensitive repositories should only be hosted in an organization that follows the recommendations above.
  • Production branches should be identified and configured:
    • To not allow force pushes.
    • To give commit privileges to only a small set of users.
    • To enforce those restrictions on admins and owners as well.
    • To require all commits to be GPG-signed, using keys known in advance.

Workflow recommendations:

  • Deployments, releases, and other audit-worthy events, should be marked with a signed tag from a GPG key known in advance.
  • Deployment and release criteria should include an audit of all signed commits and tags to ensure they are signed with the expected keys.
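The audit step described above can be sketched roughly as follows: given commit metadata (as could be extracted from `git log --show-signature`), flag any commit that is unsigned or signed by a key outside the expected set. All names here are hypothetical and are not taken from the mozilla-services/GitHub-Audit tooling:

```python
from typing import Iterable, List, NamedTuple, Optional

class Commit(NamedTuple):
    sha: str
    signing_key: Optional[str]  # GPG key fingerprint, or None if unsigned

def audit_commits(commits: Iterable[Commit],
                  trusted_keys: Iterable[str]) -> List[str]:
    """Return the SHAs of commits that fail the signing policy:
    unsigned, or signed by a key outside the trusted set."""
    trusted = set(trusted_keys)
    return [c.sha for c in commits
            if c.signing_key is None or c.signing_key not in trusted]
```

A deployment gate would then refuse to ship if this list is non-empty, surfacing the offending commits for human review.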

There are some costs to implementing these protections – especially those around the signing of commits. We have developed some internal tooling to help with auditing the configurations, and plan to add tools for auditing commits. Those tools are available in the mozilla-services/GitHub-Audit repository.


Here’s an example of using the audit tools. First we obtain a local copy of the data we’ll need for the “octo_org” organization, and then we report on each repository:

$ ./ octo_org
2018-07-06 13:52:40,584 INFO: Running as ms_octo_cat
2018-07-06 13:52:40,854 INFO: Gathering branch protection data. (calls remaining 4992).
2018-07-06 13:52:41,117 INFO: Starting on org octo_org. (calls remaining 4992).
2018-07-06 13:52:59,116 INFO: Finished gathering branch protection data (calls remaining 4947).

Now with the data cached locally, we can run as many reports as we’d like. For example, we have written one report showing which of the above recommendations are being followed:

$ ./ --header octo_org.db.json

We can see that only “octo_org/react-starter” has enabled protection against force pushes on its production branch. The final output is in CSV format, for easy pasting into spreadsheets.

How you can help

We are still rolling out these recommendations across our teams, and learning as we go. If you think our Repository Security recommendations are appropriate for your situation, please help us make implementation easier. Add your experience to the Tips ‘n Tricks page, or open issues on our GitHub-Audit repository.

Why we need better tracking protection

Mozilla has recently announced a change in our approach to protecting users against tracking. This announcement came as a result of extensive research, both internally and externally, that shows that users are not in control of how their data is used online. In this post, I describe why we’ve chosen to pursue an approach that blocks tracking by default.

People are uncomfortable with the data collection that happens on the web. The actions we take online are deeply personal, and yet we have few options to understand and control how that data is collected and used. In fact, research has repeatedly shown that the majority of people dislike the collection of personal data for targeted advertising. They report that they find the data collection invasive, creepy, and scary.

The data collected by trackers can create real harm, including enabling divisive political advertising or shaping health insurance companies’ decisions. These are harms we can’t reasonably expect people to anticipate and take steps to avoid. As such, the web lacks an incentive mechanism for companies to compete on privacy.

Opt-in privacy protections have fallen short. Firefox has always offered a baseline set of protections and allowed people to opt into additional privacy features. In parallel, Mozilla worked with industry groups to develop meaningful privacy standards, such as Do Not Track.

These efforts have not been successful. Do Not Track has seen limited adoption by sites, and many of those that initially respected that signal have stopped honoring it. Industry opt-outs don’t always limit data collection and instead only forbid specific uses of the data; past research has shown that people don’t understand this. In addition, research has shown that people rarely take steps to change their default settings — our own data agrees.

Advanced tracking techniques reduce the effectiveness of traditional privacy controls. Many people take steps to protect themselves online, for example, by clearing their browser cookies. In response, some trackers have developed advanced tracking techniques that are able to identify you without the use of cookies. These include browser fingerprinting and the abuse of browser identity and security features for individual identification.

The impact of these techniques isn’t limited to the website that uses them; the linking of tracking identifiers through “cookie syncing” means that a single tracker which uses an invasive technique can share the information it uncovers with other trackers as well.

The features we’ve announced will significantly improve the status quo, but there’s more work to be done. Keep an eye out for future blog posts from us as we continue to improve Firefox’s protections.

TLS 1.3 Published: in Firefox Today

On Friday, the IETF published TLS 1.3 as RFC 8446. It’s already shipping in Firefox and you can use it today. This version of TLS incorporates significant improvements in both security and speed.

Transport Layer Security (TLS) is the protocol that powers every secure transaction on the Web. The version of TLS in widest use, TLS 1.2, is ten years old this month and hasn’t really changed that much from its roots in the Secure Sockets Layer (SSL) protocol, designed back in the mid-1990s. Despite the minor version number bump, this isn’t the small revision it appears to be. TLS 1.3 is a major revision that represents more than 20 years of experience with communication security protocols, and four years of careful work from the standards, security, implementation, and research communities (see Nick Sullivan’s great post for the cool details).


TLS 1.3 incorporates a number of important security improvements.

First, it improves user privacy. In previous versions of TLS, the entire handshake was in the clear which leaked a lot of information, including both the client and server’s identities. In addition, many network middleboxes used this information to enforce network policies and failed if the information wasn’t where they expected it.  This can lead to breakage when new protocol features are introduced. TLS 1.3 encrypts most of the handshake, which provides better privacy and also gives us more freedom to evolve the protocol in the future.

Second, TLS 1.3 removes a lot of outdated cryptography. TLS 1.2 included a pretty wide variety of cryptographic algorithms (RSA key exchange, 3DES, static Diffie-Hellman) and this was the cause of real attacks such as FREAK, Logjam, and Sweet32. TLS 1.3 instead focuses on a small number of well understood primitives (Elliptic Curve Diffie-Hellman key establishment, AEAD ciphers, HKDF).

Finally, TLS 1.3 is designed in cooperation with the academic security community and has benefitted from an extraordinary level of review and analysis.  This included formal verification of the security properties by multiple independent groups; the TLS 1.3 RFC cites 14 separate papers analyzing the security of various aspects of the protocol.


While computers have gotten much faster, the time data takes to get between two network endpoints is limited by the speed of light and so round-trip time is a limiting factor on protocol performance. TLS 1.3’s basic handshake takes one round-trip (down from two in TLS 1.2) and TLS 1.3 incorporates a “zero round-trip” mode in which the client can send data to the server in its first set of network packets. Put together, this means faster web page loading.

What Now?

TLS 1.3 is already widely deployed: both Firefox and Chrome have fielded “draft” versions. Firefox 61 is already shipping draft-28, which is essentially the same as the final published version (just with a different version number). We expect to ship the final version in Firefox 63, scheduled for October 2018. Cloudflare, Google, and Facebook are running it on their servers today. Our telemetry shows that around 5% of Firefox connections are TLS 1.3. Cloudflare reports similar numbers, and Facebook reports that an astounding 50+% of their traffic is already TLS 1.3!
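If you want to check a server against TLS 1.3 yourself, one approach is to pin a TLS context so the handshake fails rather than silently falling back to an older protocol version. This sketch assumes Python 3.7+ linked against OpenSSL 1.1.1 or newer; `example.com` is a placeholder host:

```python
import ssl

# Build a context that refuses anything older than TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# On a live connection (network access assumed) you would then do:
#   import socket
#   with socket.create_connection(("example.com", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#           print(tls.version())  # reports "TLSv1.3" if negotiated
print(ssl.HAS_TLSv1_3)  # True when the linked OpenSSL supports TLS 1.3
```

If the server only speaks TLS 1.2 or older, `wrap_socket` raises an `SSLError` instead of downgrading, which makes the check unambiguous.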

TLS 1.3 was a big effort with a huge number of contributors, and it’s great to see it finalized. With the publication of the TLS 1.3 RFC we expect to see further deployments from other browsers, servers and toolkits, all of which makes the Internet more secure for everyone.


Safe Harbor for Security Bug Bounty Participants

Mozilla established one of the first modern security bug bounty programs back in 2004. Since that time, much of the technology industry has followed our lead and bounty programs have become a critical tool for finding security flaws in the software we all use. But even while these programs have reached broader acceptance, the legal protections afforded to bounty program participants have failed to evolve, putting security researchers at risk and possibly stifling that research.

That is why we are announcing changes to our bounty program policies to better protect security researchers working to improve Firefox and to codify the best practices that we’ve been using.

We often hear of researchers who are concerned that companies or governments may take legal actions against them for their legitimate security research. For example, the Computer Fraud and Abuse Act (CFAA) – essentially the US anti-hacking law that criminalizes unauthorized access to computer systems – could be used to punish bounty participants testing the security of systems and software. Just the potential for legal liability might discourage important security research.

Mozilla has criticized the CFAA for being overly broad and for potentially criminalizing activity intended to improve the security of the web. The policy changes we are making today are intended to create greater clarity for our own bounty program and to remove this legal risk for researchers participating in good faith.

There are two important policy changes we are making. First, we have clarified what is in scope for our bounty program and specifically have called out that bounty participants should not access, modify, delete, or store our users’ data. This is critical because, to protect participants in our bug bounty program, we first have to define the boundaries for bug bounty eligibility.

Second, we are stating explicitly that we will not threaten or bring any legal action against anyone who makes a good faith effort to comply with our bug bounty program. That means we promise not to sue researchers under any law (including the DMCA and CFAA) or under our applicable Terms of Service and Acceptable Use Policy for their research through the bug bounty program, and we consider that security research to be “authorized” under the CFAA.

You can see the full changes we’ve made to our policies in the General Eligibility and Safe Harbor sections of our main bounty page. These changes will help researchers know what to expect from Mozilla and represent an important next step for a program we started more than a decade ago. We want to thank Amit Elazari, who brought this safe harbor issue to our attention and is working to drive change in this space, and Dropbox for the leadership it has shown through recent changes to its vulnerability disclosure policy. We hope that other bounty programs will adopt similar policies.

Update on the Distrust of Symantec TLS Certificates

Firefox 60 (the current release) displays an “untrusted connection” error for any website using a TLS/SSL certificate issued before June 1, 2016 that chains up to a Symantec root certificate. This is part of the consensus proposal for removing trust in Symantec TLS certificates that Mozilla adopted in 2017. This proposal was also adopted by the Google Chrome team, and more recently Apple announced their plan to distrust Symantec TLS certificates. As previously stated, DigiCert’s acquisition of Symantec’s Certification Authority has not changed these plans.

In early March when we last blogged on this topic, roughly 1% of websites were broken in Firefox 60 due to the change described above. Just before the release of Firefox 60 on May 9, 2018, less than 0.15% of websites were impacted – a major improvement in just a few months’ time.

The next phase of the consensus plan is to distrust any TLS certificate that chains up to a Symantec root, regardless of when it was issued (note that there is a small exception for TLS certificates issued by a few intermediate certificates that are managed by certain companies, and this phase does not affect S/MIME certificates). This change is scheduled for Firefox 63, with the following planned release dates:

  • Beta – September 5
  • Release – October 23

We have begun to assess the impact of the upcoming change to Firefox 63. We found that 3.5% of the top 1 million websites are still using Symantec certificates that will be distrusted in September and October (sooner in Firefox Nightly)! This number represents a very significant impact to Firefox users, but it has declined by over 20% in the past two months, and as the Firefox 63 release approaches, we expect the same rapid pace of improvement that we observed with the Firefox 60 release.

We strongly encourage website operators to replace any remaining Symantec TLS certificates immediately to avoid impacting their users as these certificates become distrusted in Firefox Nightly and Beta over the next few months. This upcoming change can already be tested in Firefox Nightly by setting the security.pki.distrust_ca_policy preference to “2” via the Configuration Editor.

Introducing the ASan Nightly Project

Every day, countless Mozillians spend numerous hours testing Firefox to ensure that Firefox users get a stable and secure product. However, no product is bug free and, despite all of our testing efforts, browsers still crash sometimes. When we investigate our crash reports, some of them even look like lingering security issues (e.g. use-after-free or other memory corruptions), but the data we have in these reports is often not sufficient to make them actionable on their own (i.e. they do not provide enough information for a developer to find and fix the problem). This is particularly true for use-after-free problems and some other types of memory corruption, where the actual crash happens a lot later than the memory violation itself.

In our automated integration and fuzz testing, we have been using AddressSanitizer (ASan), a compile-time instrumentation, very successfully for over 5 years. The information it provides about a use-after-free is much more actionable than a simple crash stack: it not only tells you immediately when the violation happens, but also includes the location where the memory was previously freed.

In order to leverage the combined power of Nightly testing and ASan, we have joined them together to form the ASan Nightly Project. For this purpose, we made a custom ASan Nightly build that is equipped with a special ASan reporter addon. This addon is capable of collecting ASan errors and reporting them back to Mozilla once they are detected. We launched this project to find errors in the wild and then leverage the ASan error report to identify and fix the problem, even when it is not reproducible. So far, we have made these builds for Linux only, but we are actively working on Windows and Mac builds.

Of course, this approach comes with a drawback: while ASan’s performance can almost compete with that of a regular build, its already higher memory usage grows the longer you run the browser, as ASan needs to retain freed memory for a while in order to detect use-after-free on it. Hence, running such a build requires you to have enough RAM (at least 16 GB is recommended) and to restart the browser once or twice a day to free memory.

However, if you are willing to browse the web using this new Firefox environment, you might be eligible to earn a bug bounty: We will treat the automated reporter submissions as if they were filed in Bugzilla (with no test case) which means that if the issue is 1) an eligible security problem and 2) can be fixed by our developers, you will receive a bug bounty for it. All rules of the Mozilla Bug Bounty Program apply. If you would like to participate, ensure that you read the Bug Bounty section carefully and set the right preference, so your report can be attributed to you.

This project can only succeed if enough people are using it. So if you meet the current requirements, we would be very happy if you joined the project.