September 2018 CA Communication

Mozilla has sent a CA Communication to inform Certification Authorities (CAs) who have root certificates included in Mozilla’s program about current events relevant to their membership in our program and to remind them of upcoming deadlines. This CA Communication has been emailed to the Primary Point of Contact (POC) and an email alias for each CA in Mozilla’s program, and they have been asked to respond to the following 7 action items:

  1. Mozilla recently published version 2.6.1 of our Root Store Policy. The first action asks CAs to confirm that they have read the new version of the policy.
  2. The second action asks CAs to ensure that their CP/CPS complies with the changes that were made to domain validation requirements in version 2.6.1 of Mozilla’s Root Store Policy.
  3. CAs must confirm that they will comply with the new requirement for intermediate certificates issued after 1-January 2019 to be constrained to prevent use of the same intermediate certificate to issue both SSL and S/MIME certificates.
  4. CAs are reminded in action 4 that Mozilla is now rejecting audit reports that do not comply with section 3.1.4 of Mozilla’s Root Store Policy.
  5. CAs must confirm that they have complied with the 1-August 2018 deadline to discontinue use of BR domain validation methods 1 “Validating the Applicant as a Domain Contact” and 5 “Domain Authorization Document”.
  6. CAs are reminded of their obligation to add new intermediate CA certificates to CCADB within one week of certificate creation, and before any such subordinate CA is allowed to issue certificates. Later this year, Mozilla plans to begin preloading the certificate database shipped with Firefox with intermediate certificates disclosed in the CCADB, as an alternative to “AIA chasing”. This is intended to reduce the incidence of “unknown issuer” errors caused by server operators neglecting to include intermediate certificates in their configurations.
  7. In action 7 we are gathering information about the Certificate Transparency (CT) logging practices of CAs. Later this year, Mozilla is planning to use CT logging data to begin testing a new certificate validation mechanism called CRLite which may reduce bandwidth requirements for CAs and increase performance of websites. Note that CRLite does not replace OneCRL which is a revocation list controlled by Mozilla.

The full action items can be read here. Responses to the survey will be automatically and immediately published by the CCADB.

With this CA Communication, we reiterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

Protecting Mozilla’s GitHub Repositories from Malicious Modification

At Mozilla, we’ve been working to ensure our repositories hosted on GitHub are protected from malicious modification. As the recent Gentoo incident demonstrated, such attacks are possible.

Mozilla’s original usage of GitHub was as an alternative way to provide access to our source code. Similar to Gentoo, the “source of truth” repositories were maintained on our own infrastructure. While we still do utilize our own infrastructure for much of the Firefox browser code, Mozilla has many projects which exist only on GitHub. While some of those projects are just experiments, others are used in production (e.g. Firefox Accounts). We need to protect such “sensitive repositories” against malicious modification, while also keeping the barrier to contribution as low as practical.

This post describes the mitigations we have put in place to prevent shipping (or deploying) from a compromised repository. We are sharing both our findings and some tooling to support auditing. These add protections with minimal disruption to common GitHub workflows.

The risk we are addressing here is the compromise of a GitHub user’s account, via mechanisms unique to GitHub. As the Gentoo and other incidents show, when a user account is compromised, any resource the user has permissions to can be affected.

Overview

GitHub is a wonderful ecosystem with many extensions, or “apps”, to make certain workflows easier. Apps obtain permission from a user to perform actions on their behalf. An app can ask for permissions that include modifying or adding user credentials. GitHub makes these permission requests transparent and requires the user to approve via the web interface, but not all users may be conversant with the implications of granting those permissions to an app. They also may not realize that approving such permissions for their personal repositories can grant the same access to any repository across GitHub where they can make changes.

Excessive permissions can expose repositories with sensitive information to risks, without the repository admins being aware of those risks. The best a repository admin can do is detect a fraudulent modification after it has been pushed back to GitHub. Neither GitHub nor git can be configured to prevent or highlight this sort of malicious modification; external monitoring is required.

Implementation

The following are taken from our approach to addressing this concern, with Mozilla specifics removed. As much as possible, we borrowed from the web’s best practices, used features of the GitHub platform, and tried to avoid adding friction to daily developer workflows.

Organization recommendations:

  • 2FA must be required for all members and collaborators.
  • All users, or at least those with elevated permissions:
    • Should have contact methods (email, IM) given to the org owners or repo admins. (GitHub allows users to hide their contact info for privacy.)
    • Should understand it is their responsibility to inform the org owners or repo admins if they ever suspect their account has been compromised (e.g. a stolen laptop).

Repository recommendations:

  • Sensitive repositories should only be hosted in an organization that follows the recommendations above.
  • Production branches should be identified and configured:
    • To not allow force pushes.
    • To grant commit privileges only to a small set of users.
    • To enforce those restrictions on admins & owners as well.
    • To require all commits to be GPG signed, using keys known in advance.

Workflow recommendations:

  • Deployments, releases, and other audit-worthy events, should be marked with a signed tag from a GPG key known in advance.
  • Deployment and release criteria should include an audit of all signed commits and tags to ensure they are signed with the expected keys.
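As an illustration of what such an audit can look for, here is a minimal Python sketch (not the actual GitHub-Audit tooling; the key IDs and branch name are placeholders) that lists the commits on a branch with their GPG signature status and signing key, and flags anything unsigned or signed by an unexpected key:

import subprocess

# Placeholder set of key IDs known in advance; replace with your release keys.
EXPECTED_KEYS = {"0123456789ABCDEF"}

def audit_branch(branch="main"):
    # %H = commit hash, %G? = signature status ("G" means good), %GK = signing key
    log = subprocess.run(
        ["git", "log", "--format=%H %G? %GK", branch],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in log.splitlines():
        sha, status, key = (line.split(" ") + ["", ""])[:3]
        if status != "G" or key not in EXPECTED_KEYS:
            print(f"suspect commit {sha}: status={status or 'N'}, key={key or 'none'}")

if __name__ == "__main__":
    audit_branch()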

There are some costs to implementing these protections – especially those around the signing of commits. We have developed some internal tooling to help with auditing the configurations, and plan to add tools for auditing commits. Those tools are available in the mozilla-services/GitHub-Audit repository.


Here’s an example of using the audit tools. First we obtain a local copy of the data we’ll need for the “octo_org” organization, and then we report on each repository:

$ ./get_branch_protections.py octo_org
2018-07-06 13:52:40,584 INFO: Running as ms_octo_cat
2018-07-06 13:52:40,854 INFO: Gathering branch protection data. (calls remaining 4992).
2018-07-06 13:52:41,117 INFO: Starting on org octo_org. (calls remaining 4992).
2018-07-06 13:52:59,116 INFO: Finished gathering branch protection data (calls remaining 4947).

Now with the data cached locally, we can run as many reports as we’d like. For example, we have written one report showing which of the above recommendations are being followed:

$ ./report_branch_status.py --header octo_org.db.json
name,protected,restricted,enforcement,signed,team_used
octo_org/react-starter,True,False,False,False,False
octo_org/node-starter,False,False,False,False,False

We can see that only “octo_org/react-starter” has enabled protection against force pushes on its production branch. The final output is in CSV format, for easy pasting into spreadsheets.

How you can help

We are still rolling out these recommendations across our teams, and learning as we go. If you think our Repository Security recommendations are appropriate for your situation, please help us make implementation easier. Add your experience to the Tips ‘n Tricks page, or open issues on our GitHub-Audit repository.

Why we need better tracking protection

Mozilla has recently announced a change in our approach to protecting users against tracking. This announcement came as a result of extensive research, both internally and externally, that shows that users are not in control of how their data is used online. In this post, I describe why we’ve chosen to pursue an approach that blocks tracking by default.

People are uncomfortable with the data collection that happens on the web. The actions we take online are deeply personal, and yet we have few options to understand and control how that data is collected and used. In fact, research has repeatedly shown that the majority of people dislike the collection of personal data for targeted advertising. They report that they find the data collection invasive, creepy, and scary.

The data collected by trackers can create real harm, including enabling divisive political advertising or shaping health insurance companies’ decisions. These are harms we can’t reasonably expect people to anticipate and take steps to avoid. As such, the web lacks an incentive mechanism for companies to compete on privacy.

Opt-in privacy protections have fallen short. Firefox has always offered a baseline set of protections and allowed people to opt into additional privacy features. In parallel, Mozilla worked with industry groups to develop meaningful privacy standards, such as Do Not Track.

These efforts have not been successful. Do Not Track has seen limited adoption by sites, and many of those that initially respected that signal have stopped honoring it. Industry opt-outs don’t always limit data collection and instead only forbid specific uses of the data; past research has shown that people don’t understand this. In addition, research has shown that people rarely take steps to change their default settings — our own data agrees.

Advanced tracking techniques reduce the effectiveness of traditional privacy controls. Many people take steps to protect themselves online, for example, by clearing their browser cookies. In response, some trackers have developed advanced tracking techniques that are able to identify you without the use of cookies. These include browser fingerprinting and the abuse of browser identity and security features for individual identification.

The impact of these techniques isn’t limited to the website that uses them; the linking of tracking identifiers through “cookie syncing” means that a single tracker which uses an invasive technique can share the information it uncovers with other trackers as well.

The features we’ve announced will significantly improve the status quo, but there’s more work to be done. Keep an eye out for future blog posts from us as we continue to improve Firefox’s protections.

TLS 1.3 Published: in Firefox Today

On Friday the IETF published TLS 1.3 as RFC 8446. It’s already shipping in Firefox and you can use it today. This version of TLS incorporates significant improvements in both security and speed.

Transport Layer Security (TLS) is the protocol that powers every secure transaction on the Web. The version of TLS in widest use, TLS 1.2, is ten years old this month and hasn’t really changed that much from its roots in the Secure Sockets Layer (SSL) protocol, designed back in the mid-1990s. Despite the minor version number bump, this isn’t the minor revision it appears to be. TLS 1.3 is a major revision that represents more than 20 years of experience with communication security protocols, and four years of careful work from the standards, security, implementation, and research communities (see Nick Sullivan’s great post for the cool details).

Security

TLS 1.3 incorporates a number of important security improvements.

First, it improves user privacy. In previous versions of TLS, the entire handshake was in the clear, which leaked a lot of information, including both the client’s and the server’s identities. In addition, many network middleboxes used this information to enforce network policies and failed if the information wasn’t where they expected it, which can lead to breakage when new protocol features are introduced. TLS 1.3 encrypts most of the handshake, which provides better privacy and also gives us more freedom to evolve the protocol in the future.

Second, TLS 1.3 removes a lot of outdated cryptography. TLS 1.2 included a pretty wide variety of cryptographic algorithms (RSA key exchange, 3DES, static Diffie-Hellman) and this was the cause of real attacks such as FREAK, Logjam, and Sweet32. TLS 1.3 instead focuses on a small number of well understood primitives (Elliptic Curve Diffie-Hellman key establishment, AEAD ciphers, HKDF).

Finally, TLS 1.3 is designed in cooperation with the academic security community and has benefitted from an extraordinary level of review and analysis.  This included formal verification of the security properties by multiple independent groups; the TLS 1.3 RFC cites 14 separate papers analyzing the security of various aspects of the protocol.

Speed

While computers have gotten much faster, the time data takes to get between two network endpoints is limited by the speed of light and so round-trip time is a limiting factor on protocol performance. TLS 1.3’s basic handshake takes one round-trip (down from two in TLS 1.2) and TLS 1.3 incorporates a “zero round-trip” mode in which the client can send data to the server in its first set of network packets. Put together, this means faster web page loading.
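To see this from the client side, a quick check with Python’s ssl module (assuming a Python build linked against OpenSSL 1.1.1 or later, which is required for TLS 1.3) reports which protocol version and cipher a given server negotiates:

import socket
import ssl

# Connect to a server and report the negotiated TLS version and cipher suite.
context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. "TLSv1.3" when both sides support it
        print(tls.cipher())   # the negotiated AEAD cipher suite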

What Now?

TLS 1.3 is already widely deployed: both Firefox and Chrome have fielded “draft” versions. Firefox 61 is already shipping draft-28, which is essentially the same as the final published version (just with a different version number). We expect to ship the final version in Firefox 63, scheduled for October 2018. Cloudflare, Google, and Facebook are running it on their servers today. Our telemetry shows that around 5% of Firefox connections are TLS 1.3. Cloudflare reports similar numbers, and Facebook reports that an astounding 50+% of their traffic is already TLS 1.3!

TLS 1.3 was a big effort with a huge number of contributors, and it’s great to see it finalized. With the publication of the TLS 1.3 RFC we expect to see further deployments from other browsers, servers and toolkits, all of which makes the Internet more secure for everyone.


Safe Harbor for Security Bug Bounty Participants

Mozilla established one of the first modern security bug bounty programs back in 2004. Since that time, much of the technology industry has followed our lead and bounty programs have become a critical tool for finding security flaws in the software we all use. But even while these programs have reached broader acceptance, the legal protections afforded to bounty program participants have failed to evolve, putting security researchers at risk and possibly stifling that research.

That is why we are announcing changes to our bounty program policies to better protect security researchers working to improve Firefox and to codify the best practices that we’ve been using.

We often hear of researchers who are concerned that companies or governments may take legal actions against them for their legitimate security research. For example, the Computer Fraud and Abuse Act (CFAA) – essentially the US anti-hacking law that criminalizes unauthorized access to computer systems – could be used to punish bounty participants testing the security of systems and software. Just the potential for legal liability might discourage important security research.

Mozilla has criticized the CFAA for being overly broad and for potentially criminalizing activity intended to improve the security of the web. The policy changes we are making today are intended to create greater clarity for our own bounty program and to remove this legal risk for researchers participating in good faith.

There are two important policy changes we are making. First, we have clarified what is in scope for our bounty program and specifically have called out that bounty participants should not access, modify, delete, or store our users’ data. This is critical because, to protect participants in our bug bounty program, we first have to define the boundaries for bug bounty eligibility.

Second, we are stating explicitly that we will not threaten or bring any legal action against anyone who makes a good faith effort to comply with our bug bounty program. That means we promise not to sue researchers under any law (including the DMCA and CFAA) or under our applicable Terms of Service and Acceptable Use Policy for their research through the bug bounty program, and we consider that security research to be “authorized” under the CFAA.

You can see the full changes we’ve made to our policies in the General Eligibility and Safe Harbor sections of our main bounty page. These changes will help researchers know what to expect from Mozilla and represent an important next step for a program we started more than a decade ago. We want to thank Amit Elazari, who brought this safe harbor issue to our attention and is working to drive change in this space, and Dropbox for the leadership it has shown through recent changes to its vulnerability disclosure policy. We hope that other bounty programs will adopt similar policies.

Update on the Distrust of Symantec TLS Certificates

Firefox 60 (the current release) displays an “untrusted connection” error for any website using a TLS/SSL certificate issued before June 1, 2016 that chains up to a Symantec root certificate. This is part of the consensus proposal for removing trust in Symantec TLS certificates that Mozilla adopted in 2017. This proposal was also adopted by the Google Chrome team, and more recently Apple announced their plan to distrust Symantec TLS certificates. As previously stated, DigiCert’s acquisition of Symantec’s Certification Authority has not changed these plans.

In early March when we last blogged on this topic, roughly 1% of websites were broken in Firefox 60 due to the change described above. Just before the release of Firefox 60 on May 9, 2018, less than 0.15% of websites were impacted – a major improvement in just a few months’ time.

The next phase of the consensus plan is to distrust any TLS certificate that chains up to a Symantec root, regardless of when it was issued (note that there is a small exception for TLS certificates issued by a few intermediate certificates that are managed by certain companies, and this phase does not affect S/MIME certificates). This change is scheduled for Firefox 63, with the following planned release dates:

  • Beta – September 5
  • Release – October 23

We have begun to assess the impact of the upcoming change to Firefox 63. We found that 3.5% of the top 1 million websites are still using Symantec certificates that will be distrusted in September and October (sooner in Firefox Nightly)! This number represents a very significant impact to Firefox users, but it has declined by over 20% in the past two months, and as the Firefox 63 release approaches, we expect the same rapid pace of improvement that we observed with the Firefox 60 release.

We strongly encourage website operators to replace any remaining Symantec TLS certificates immediately to avoid impacting their users as these certificates become distrusted in Firefox Nightly and Beta over the next few months. This upcoming change can already be tested in Firefox Nightly by setting the security.pki.distrust_ca_policy preference to “2” via the Configuration Editor.
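For operators unsure of what their sites currently serve, the rough heuristic sketched below may help (this is not Firefox’s actual distrust logic, and the brand list is an assumption for illustration): fetch the site’s leaf certificate and check whether the issuer organization matches one of the Symantec-operated brands.

import ssl
from cryptography import x509
from cryptography.x509.oid import NameOID

# Assumed Symantec-operated brand names, for illustration only.
SYMANTEC_BRANDS = ("symantec", "verisign", "geotrust", "thawte", "rapidssl")

def looks_symantec_issued(host, port=443):
    pem = ssl.get_server_certificate((host, port))      # leaf certificate only
    cert = x509.load_pem_x509_certificate(pem.encode())
    orgs = cert.issuer.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)
    return any(brand in attr.value.lower() for attr in orgs for brand in SYMANTEC_BRANDS)

print(looks_symantec_issued("example.com"))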

Introducing the ASan Nightly Project

Every day, countless Mozillians spend numerous hours testing Firefox to ensure that Firefox users get a stable and secure product. However, no product is bug free and, despite all of our testing efforts, browsers still crash sometimes. When we investigate our crash reports, some of them even look like lingering security issues (e.g. use-after-free or other memory corruptions) but the data we have in these reports is often not sufficient for them to be actionable on their own (i.e. they do not provide enough information for a developer to be able to find and fix the problem). This is particularly true for use-after-free problems and some other types of memory corruptions where the actual crash happens a lot later than the memory violation itself.

In our automated integration and fuzz testing, we have been using AddressSanitizer (ASan), a compile-time instrumentation tool, very successfully for over 5 years. The information it provides about a use-after-free is much more actionable than a simple crash stack: it not only tells you immediately when the violation happens, but also includes the location where the memory was freed previously.

In order to leverage the combined power of Nightly testing and ASan, we have joined them together to form the ASan Nightly Project. For this purpose we made a custom ASan Nightly build that is equipped with a special ASan reporter addon. This addon is capable of collecting and reporting ASan errors back to Mozilla once they are detected. We launched this project to find errors in the wild and then leverage the ASan error report to identify and fix the problem, even though it might not be reproducible. So far, we have made these builds for Linux only, but we are actively working on Windows and Mac builds.

Of course this approach comes with a drawback: while ASan’s performance can almost compete with that of a regular build, its already higher memory usage grows the longer you run the browser, as ASan needs to retain freed memory for a while in order to detect use-after-free on it. Hence, running such a build requires you to have enough RAM (at least 16 GB is recommended) and to restart the browser once or twice a day to free memory.

However, if you are willing to browse the web using this new Firefox environment, you might be eligible to earn a bug bounty: We will treat the automated reporter submissions as if they were filed in Bugzilla (with no test case) which means that if the issue is 1) an eligible security problem and 2) can be fixed by our developers, you will receive a bug bounty for it. All rules of the Mozilla Bug Bounty Program apply. If you would like to participate, ensure that you read the Bug Bounty section carefully and set the right preference, so your report can be attributed to you.

This project can only succeed if enough people are using it. So if you meet the current requirements, we would be very happy if you joined the project.

Root Store Policy Updated

After several months of discussion on the mozilla.dev.security.policy mailing list, our Root Store Policy governing Certification Authorities (CAs) that are trusted in Mozilla products has been updated. Version 2.6 has an effective date of July 1st, 2018.

More than a dozen issues were addressed in this update, including the following changes:

  • Section 2.2 “Validation Practices” now requires CAs with the email trust bit to clearly disclose their email address validation methods in their CP/CPS.
  • The use of IP Address validation methods defined by the CA has been banned in certain circumstances.
  • Methods used for IP Address validation must now be clearly specified in the CA’s CP/CPS.
  • Section 3.1 “Audits” increases the WebTrust EV minimum version to 1.6.0 and removes ETSI TS 102 042 and 101 456 from the list of acceptable audit schemes in favor of EN 319 411.
  • Section 3.1.4 “Public Audit Information” formalizes the requirement for an English language version of the audit statement supplied by the Auditor.
  • Section 5.2 “Forbidden and Required Practices” moves the existing ban on CA key pair generation for SSL certificates into our policy.
  • After January 1, 2019, CAs will be required to create separate intermediate certificates for issuing SSL and S/MIME certificates. Newly issued intermediate certificates will need to be restricted with an EKU extension that doesn’t contain anyPolicy, or both serverAuth and emailProtection. Intermediate certificates issued prior to 2019 that do not comply with this requirement may continue to be used to issue new end-entity certificates. (A sketch of such an EKU check appears after this list.)
  • Section 5.3.2 “Publicly Disclosed and Audited” clarifies that Mozilla expects newly issued intermediate certificates to be included on the CA’s next periodic audit report. As long as the CA has current audits, no special audit is required when issuing a new intermediate. This matches the requirements in the CA/Browser Forum’s Baseline Requirements (BR) section 8.1.
  • Section 7.1 “Inclusions” adds a requirement that roots being added to Mozilla’s program must have complied with Mozilla’s Root Store Policy from the time that they were created. This effectively means that roots in existence prior to 2014 that did not receive BR audits after 2013 are not eligible for inclusion in Mozilla’s program. Roots with documented BR violations may also be excluded from Mozilla’s root store under this policy.
  • Section 8 “CA Operational Changes” now requires notification when an intermediate CA certificate is transferred to a third party.
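To illustrate the EKU requirement above, here is a minimal sketch using the Python cryptography library (the file name is a placeholder) that checks whether an intermediate certificate’s EKU extension permits both SSL and S/MIME issuance:

from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

# Placeholder file name; load the intermediate CA certificate to inspect.
with open("intermediate.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

try:
    eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
except x509.ExtensionNotFound:
    print("No EKU extension: the certificate is not constrained to a single use")
else:
    has_tls = ExtendedKeyUsageOID.SERVER_AUTH in eku
    has_smime = ExtendedKeyUsageOID.EMAIL_PROTECTION in eku
    if has_tls and has_smime:
        print("EKU permits both SSL and S/MIME issuance: not allowed after 2019")
    else:
        print("EKU separates SSL and S/MIME issuance")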

A comparison of all the policy changes is available here.

Scanning for breached accounts with k-Anonymity

The new Firefox Monitor service will use anonymized range query API endpoints from Have I Been Pwned (HIBP). This new Firefox feature allows users to check for compromised online accounts while preserving their privacy.

An API request can reveal sensitive data about the requesting party, such as subject identifiers like cookies and IP addresses.

Anonymizing Account Identifiers

Operations like ‘search’ often need plaintext or simply hashed data. But, as Cloudflare has described in their own HIBP integration, searching with plain account data introduces privacy and security risks that allow an adversary, or even the service itself, to use the data to breach the searched account.

As an alternative, a user search client could download the entire set of data. Unfortunately, this practice discloses all of the service’s data to the client, which could then abuse the data of all other users.

Anonymized Data Sharing

To mitigate these risks, Mozilla is working with Troy Hunt – creator and maintainer of HIBP – to use new hash range query API endpoints for breached account data in the Firefox Monitor project.

Hash range queries add k-Anonymity to the data that Mozilla exchanges with HIBP. Data with k-Anonymity protects individuals who are the subjects of the data from re-identification while preserving the utility of the data.

When a user submits their email address to Firefox Monitor, it hashes the plaintext value and sends the first 6 characters to the HIBP API. For example, the value “test@example.com” hashes to 567159d622ffbb50b11b0efd307be358624a26ee. We send this hash prefix to the API endpoint:

GET https://haveibeenpwned.com/api/breachedaccount/range/567159
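A minimal Python sketch of this client-side step (assuming the address has already been normalized the way Firefox Monitor expects):

import hashlib

email = "test@example.com"
# SHA-1 the address and keep only the first 6 hex characters as the range prefix.
full_hash = hashlib.sha1(email.encode("utf-8")).hexdigest().upper()
prefix, suffix = full_hash[:6], full_hash[6:]
print(prefix)  # "567159", per the example above; only this prefix leaves the client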

The API responds with many suffixes and the list of breaches that include the full value:

[
  {
    "HashSuffix": "D622FFBB50B11B0EFD307BE358624A26EE",
    "Websites": [
      "LinkedIn"
    ]
  },
  {
    "HashSuffix": "0000000000000000000000000000000000",
    "Websites": [
      "Dropbox"
    ]
  },
  {
    "HashSuffix": "1111111111111111111111111111111111",
    "Websites": [
      "Adobe",
      "Plex"
    ]
  }
]

When Firefox Monitor receives this response, it loops through the objects to find whether any breached account’s HashSuffix, appended to the prefix, equals the user-submitted hash value. The following pseudo code describes the algorithm in more detail:

if (fullUserHash === userHashPrefix + breachedAccount.HashSuffix)

Using the running example from above, for the first HashSuffix, the expression evaluates to:

if ('567159D622FFBB50B11B0EFD307BE358624A26EE' ===
    '567159' + 'D622FFBB50B11B0EFD307BE358624A26EE')
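The same matching step in a slightly fuller Python sketch (the helper name and variables are hypothetical, using the example response above):

def find_breaches(full_user_hash, prefix, api_results):
    # Compare each returned suffix against the locally computed full hash;
    # the full hash itself never leaves the client.
    for entry in api_results:
        if full_user_hash == prefix + entry["HashSuffix"]:
            return entry["Websites"]
    return []

# With the example data, find_breaches(full_hash, "567159", results)
# returns ["LinkedIn"] for "test@example.com".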

Firefox Monitor discovers that “test@example.com” appears in the LinkedIn breach, but does not disclose plaintext or even hashes of sensitive user data. Further, HIBP does not disclose its entire set of hashes, which allows Firefox users to maintain their privacy, and protects breached users from further exposure.

Brute Force Attacks

Hashed data is still vulnerable to brute-force attacks. An adversary could still loop through a dictionary of email addresses to find the plaintext of all the range query results. To reduce this attack surface, Firefox Monitor does not store the range queries nor any results in its database. Instead, it caches a user’s results in an encrypted client session. We also monitor our scan endpoint to prevent abuse by an adversary attempting a brute-force breached-account enumeration attack against our service.

Helping Subjects of Data Breaches

HIBP contains billions of records of email addresses. Troy has done an outstanding job of raising awareness and educating users about breaches globally. Breached sites embrace HIBP, even self-submitting their breached data. HIBP is there to help victims of data breaches after things go wrong, and Firefox Monitor is extending that help to more people.

Blocking FTP subresource loads within non-FTP documents in Firefox 61

Firefox 61 will block subresource loads that rely on the insecure FTP protocol unless the document itself is an FTP document. For example, Firefox will block FTP subresource loads within HTTP(S) pages.

The File Transfer Protocol (FTP) enables file exchange between computers on a network. While this standard protocol is supported by all major browsers and allows convenient file sharing within a network, it’s one of the oldest protocols in use today and has a number of security issues.

The fundamental underlying problem with FTP is that any data transferred will be unencrypted and hence sent across networks in plain text, allowing attackers to steal, spoof and even modify the data transmitted. To date, many malware distribution campaigns rely on compromised FTP servers, downloading malware onto an end user’s device using the FTP protocol. Further, FTP makes HSTS protection somewhat useless, because the automated upgrading from an unencrypted to an encrypted connection that HSTS promises does not apply to FTP.

Following through on our intent to deprecate non-secure HTTP and aligning with Mozilla’s effort to improve adoption of HTTPS, Firefox will block subresource loads, like images, scripts and iframes, that rely on the insecure FTP protocol. Starting with Firefox 61, loading an FTP subresource within a non-FTP document will be blocked and an error message will be logged to the console.

For the Mozilla Security Team:
Tom Schuster and Christoph Kerschbaumer