January 2020 CA Communication

Mozilla has sent a CA Communication to inform Certificate Authorities (CAs) who have root certificates included in Mozilla’s program about current events relevant to their membership in our program and to remind them of upcoming deadlines. This CA Communication has been emailed to the Primary Point of Contact (POC) and an email alias for each CA in Mozilla’s program, and they have been asked to respond to the following 7 action items:

  1. Read and fully comply with version 2.7 of Mozilla’s Root Store Policy.
  2. Ensure that their CP and CPS comply with the updated policy section 3.3 requiring the proper use of “No Stipulation” and mapping of policy documents to CA certificates.
  3. Confirm their intent to comply with section 5.2 of Mozilla’s Root Store Policy requiring that new end-entity certificates include an EKU extension expressing their intended usage.
  4. Verify that their audit statements meet Mozilla’s formatting requirements that facilitate automated processing.
  5. Resolve issues with audits for intermediate CA certificates that have been identified by the automated audit report validation system.
  6. Confirm awareness of Mozilla’s Incident Reporting requirements and the intent to provide good incident reports.
  7. Confirm compliance with the current version of the CA/Browser Forum Baseline Requirements.

The full action items can be read here. Responses to the survey will be automatically and immediately published by the CCADB.

With this CA Communication, we reiterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

The End-to-End Design of CRLite

CRLite is a technology to efficiently compress revocation information for the whole Web PKI into a format easily delivered to Web users. It addresses the performance and privacy pitfalls of the Online Certificate Status Protocol (OCSP) while avoiding a need for some administrative decisions on the relative value of one revocation versus another. For details on the background of CRLite, see our first post, Introducing CRLite: All of the Web PKI’s revocations, compressed.

To discuss CRLite’s design, let’s first discuss the input data, and from that we can discuss how the system is made reliable.

Introducing CRLite: All of the Web PKI’s revocations, compressed

CRLite is a technology proposed by a group of researchers at the IEEE Symposium on Security and Privacy 2017 that compresses revocation information so effectively that 300 megabytes of revocation data can become 1 megabyte. It accomplishes this by combining Certificate Transparency data and Internet scan results with cascading Bloom filters, building a data structure that is reliable, easy to verify, and easy to update.
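
To make the “cascading Bloom filters” idea concrete, here is a minimal, illustrative sketch in Python of how such a cascade could be built from the sets of revoked and non-revoked certificates, and how a lookup walks the levels. It is not the real CRLite encoding, hashing, or sizing; only the alternating-level logic is representative.

import hashlib

class BloomFilter:
    def __init__(self, size: int, num_hashes: int):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item: bytes):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))

def build_cascade(revoked: set, not_revoked: set) -> list:
    # Level 0 enrolls every revoked certificate. Any non-revoked certificate
    # that the level-0 filter wrongly matches is enrolled in level 1, any
    # revoked certificate wrongly matched by level 1 goes into level 2, and
    # so on, until a level produces no false positives.
    cascade = []
    include, exclude = revoked, not_revoked
    while include:
        level = BloomFilter(size=max(8 * len(include), 64), num_hashes=3)
        for item in include:
            level.add(item)
        false_positives = {item for item in exclude if item in level}
        cascade.append(level)
        include, exclude = false_positives, include
    return cascade

def is_revoked(cascade: list, cert_id: bytes) -> bool:
    # The first level that does NOT contain the identifier settles the answer:
    # stopping at an even depth means "not revoked", an odd depth means "revoked".
    for depth, level in enumerate(cascade):
        if cert_id not in level:
            return depth % 2 == 1
    # Certificates enrolled at the deepest level match every filter; that
    # level's parity gives the answer.
    return len(cascade) % 2 == 1

revoked = {b"serial-1", b"serial-2"}
valid = {b"serial-%d" % i for i in range(3, 1000)}
cascade = build_cascade(revoked, valid)
assert is_revoked(cascade, b"serial-1") and not is_revoked(cascade, b"serial-500")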

Since December, Firefox Nightly has been shipping with CRLite, collecting telemetry on its effectiveness and speed. As can be imagined, replacing a network round-trip with local lookups makes for a substantial performance improvement. Mozilla currently updates the CRLite dataset four times per day, although not all updates are currently delivered to clients.

Revocations on the Web PKI: Past and Present

The design of the Web’s Public Key Infrastructure (PKI) included the idea that website certificates would be revocable to indicate that they are no longer safe to trust: perhaps because the server they were used on was being decommissioned, or there had been a security incident. In practice, this has been more of an aspiration, as the imagined mechanisms showed their shortcomings:

  • Certificate Revocation Lists (CRLs) quickly became large, and contained mostly irrelevant data, so web browsers didn’t download them;
  • The Online Certificate Status Protocol (OCSP) was unreliable, so when a check failed to complete, web browsers had to assume the certificate was still valid.

Since revocation is still crucial for protecting users, browsers built administratively-managed, centralized revocation lists: Firefox’s OneCRL, combined with Safe Browsing’s URL-specific warnings, provide the tools needed to handle major security incidents, but opinions differ on what to do about finer-grained revocation needs and the role of OCSP.

The Unreliability of Online Status Checks

Much has been written on the subject of OCSP reliability, and while reliability has definitely improved in recent years (per Firefox telemetry on failure rates), it still suffers under less-than-perfect network conditions: even among our Beta population, which historically has above-average connectivity, over 7% of OCSP checks time out today.

Because of this, it’s impractical to require OCSP to succeed for a connection to be secure, and in turn, an adversarial monster-in-the-middle (MITM) can simply block OCSP to achieve their ends. A couple of classic articles cover this problem in more depth.

Mozilla has been making improvements in this realm for some time, implementing OCSP Must-Staple, which was designed as a solution to this problem, while continuing to use online status checks whenever there’s no stapled response.

We’ve also made Firefox skip revocation information for short-lived certificates; however, despite improvements in automation, such short-lived certificates still make up only a small portion of the Web PKI, and most certificates remain long-lived.

Does Decentralized Revocation Bring Dangers?

The question at hand is whether end users should rely directly on a Certificate Authority’s (CA) revocations.

There are legitimate concerns that respecting CA revocations could be a path to enabling CAs to censor websites. This would be particularly troubling in the event of increased consolidation in the CA market. However, at present, if one CA were to engage in censorship, the website operator could go to a different CA.

If censorship concerns do bear out, then Mozilla has the option to use its root store policy to influence the situation in accordance with our manifesto.

Does Decentralized Revocation Bring Value?

Legitimate revocations are either done by the issuing CA because of a security incident or policy violation, or they are done on behalf of the certificate’s owner, for their own purposes. In either case, the intention is codified in the revocation: render the certificate unusable, perhaps because of a key compromise or a change of service provider, or, as in the wake of Heartbleed, as a broad precaution.

Choosing to honor some revocations while refusing others dismisses the intentions behind all the revocations left out. For Mozilla, it also violates principle 6 of our manifesto by limiting participation in the Web PKI’s security model.

There is a cost to supporting all revocations – checking OCSP:

  1. Slows down our first connection by ~130 milliseconds (CERT_VALIDATION_HTTP_REQUEST_SUCCEEDED_TIME, https://mzl.la/2ogT8TJ),
  2. Fails unsafe if an adversary is in control of the web connection, and
  3. Periodically reveals to the CA the HTTPS web host that a user is visiting.

Luckily, CRLite gives us the ability to deliver all the revocation knowledge needed to replace OCSP, and do so quickly, compactly, and accurately.

Can CRLite Replace OCSP?

Firefox Nightly users are currently only using CRLite for telemetry, but by changing the preference security.pki.crlite_mode to 2, CRLite can enter “enforcing” mode and respect CRLite revocations for eligible websites. There’s not yet a mode to disable OCSP; there’ll be more on that in subsequent posts.

This blog post is the first in a series discussing the technology behind CRLite, the observed effects, and the nature of a collaboration of this magnitude between industry and academia. The next post discusses the end-to-end design of the CRLite mechanism and why it works. Additionally, some FAQs about CRLite are available on GitHub.

Firefox 72 blocks third-party fingerprinting resources

Privacy is a human right, and is core to Mozilla’s mission. However, many companies on the web erode privacy when they collect a significant amount of personal information. Companies record our browsing history and the actions we take across websites. This practice is known as cross-site tracking, and its harms include unwanted targeted advertising and divisive political messaging.

Last year we launched Enhanced Tracking Protection (ETP) to protect our users from cross-site tracking. In Firefox 72, we are expanding that protection to include a particularly invasive form of cross-site tracking: browser fingerprinting. This is the practice of identifying a user by the unique characteristics of their browser and device. A fingerprinting script might collect the user’s screen size, browser and operating system type, the fonts the user has installed, and other device properties—all to build a unique “fingerprint” that differentiates one user’s browser from another.

Fingerprinting is bad for the web. It allows companies to track users for months, even after users clear their browser storage or use private browsing mode. Despite near-complete agreement between standards bodies and browser vendors that fingerprinting is harmful, its use on the web has steadily increased over the past decade.

We are committed to finding a way to protect users from fingerprinting without breaking the websites they visit. There are two primary ways to protect against fingerprinting: to block parties that participate in fingerprinting, or to change or remove APIs that can be used to fingerprint users.

Firefox 72 protects users against fingerprinting by blocking all third-party requests to companies that are known to participate in fingerprinting. This prevents those parties from being able to inspect properties of a user’s device using JavaScript. It also prevents them from receiving information that is revealed through network requests, such as the user’s IP address or the user agent header.

We’ve partnered with Disconnect to provide this protection. Disconnect maintains a list of companies that participate in cross-site tracking, as well as a list of those that fingerprint users. Firefox blocks all parties that meet both criteria [0]. We’ve adapted measurement techniques from past academic research to help Disconnect discover new fingerprinting domains. Disconnect performs a rigorous, public evaluation of each potential fingerprinting domain before adding it to the blocklist.
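
As a rough sketch of that classification rule, the snippet below computes the set of blocked domains as the intersection of the tracking and fingerprinting categories described in the footnote. The category names mirror the footnote, but the list format and domains here are made up for illustration; the real Disconnect list and Firefox’s matching logic are more involved.

# Illustrative only: block domains that are both trackers and fingerprinters.
TRACKING_CATEGORIES = {"Advertising", "Analytics", "Social", "Content", "Disconnect"}

def domains_to_block(categorized):
    """categorized maps a category name to the set of domains in that category."""
    trackers = set().union(*(categorized.get(c, set()) for c in TRACKING_CATEGORIES))
    fingerprinters = categorized.get("Fingerprinting", set())
    return trackers & fingerprinters

lists = {
    "Advertising": {"ads.example", "fp-ads.example"},
    "Fingerprinting": {"fp-ads.example", "fp-only.example"},
}
print(domains_to_block(lists))  # {'fp-ads.example'}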

Firefox’s blocking of fingerprinting resources represents our first step in stemming the adoption of fingerprinting technologies. The path forward in the fight against fingerprinting will likely involve both script blocking and API-level protections. We will continue to monitor fingerprinting on the web, and will work with Disconnect to build out the set of domains blocked by Firefox. Expect to hear more updates from us as we continue to strengthen the protections provided by ETP.

[0] A tracker on Disconnect’s blocklist is any domain in the Advertising, Analytics, Social, Content, or Disconnect category. A fingerprinter is any domain in the Fingerprinting category. Firefox blocks domains in the intersection of these two classifications, i.e., a domain that is both in one of the tracking categories and in the fingerprinting category.

Announcing Version 2.7 of the Mozilla Root Store Policy

After many months of discussion on the mozilla.dev.security.policy mailing list, our Root Store Policy governing Certificate Authorities (CAs) that are trusted in Mozilla products has been updated. Version 2.7 has an effective date of January 1st, 2020.

More than one dozen issues were addressed in this update, including the following changes:

  • Beginning on July 1, 2020, end-entity certificates MUST include an Extended Key Usage (EKU) extension containing KeyPurposeId(s) describing the intended usage(s) of the certificate, and the EKU extension MUST NOT contain the KeyPurposeId anyExtendedKeyUsage. This requirement is driven by the issues we’ve had with non-TLS certificates that are technically capable of being used for TLS. Some CAs have argued that certificates not intended for TLS usage are not required to comply with TLS policies; however, it is not possible to enforce policy based on intention alone. (A brief illustration of including an EKU extension in a certificate appears after this list.)
  • Certificate Policy and Certificate Practice Statement (CP/CPS) versions dated after March 2020 can’t contain blank sections and must – in accordance with RFC 3647 – only use “No Stipulation” to mean that no requirements are imposed. That term cannot be used to mean that the section is “Not Applicable”. For example, “No Stipulation” in section 3.2.2.6 “Wildcard Domain Validation” means that the policy allows wildcard certificates to be issued.
  • Section 8 “Operational Changes” will apply to unconstrained subordinate CA certificates chaining up to root certificates in Mozilla’s program.  With this change, any new unconstrained subordinate CA certificates that are transferred or signed for a different organization that doesn’t already have control of a subordinate CA certificate must undergo a public discussion before issuing certificates.
  • We’ve seen a number of instances in which a CA has multiple policy documents and there is no clear way to determine which policies apply to which certificates. Under our new policy, CAs must provide a way to clearly determine which CP/CPS applies to each root and intermediate certificate. This may require changes to CAs’ policy documents.
  • Mozilla already has a “required practice” that forbids delegation of email validation to third parties for S/MIME certificates. With this update, we add this requirement to our policy. Specifically, CAs must not delegate validation of the domain part of an email address to a third party.
  • We’ve also added specific S/MIME revocation requirements to policy in place of the existing unclear requirement for S/MIME certificates to follow the BR 4.9.1 revocation requirements. The new policy does not include specific requirements on the time in which S/MIME certificates must be revoked.
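
As promised above, here is a minimal sketch, using the Python cryptography library, of building a certificate that declares its intended usage through an EKU extension containing only id-kp-serverAuth (and therefore not anyExtendedKeyUsage). It is purely illustrative – a self-signed certificate, not a CA issuance workflow.

import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.test")])
not_before = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(not_before)
    .not_valid_after(not_before + datetime.timedelta(days=90))
    .add_extension(
        # Declare the intended usage explicitly; anyExtendedKeyUsage is absent.
        x509.ExtendedKeyUsage([ExtendedKeyUsageOID.SERVER_AUTH]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

print(cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value)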

Other changes include:

  • Detail the permitted signature algorithms and encodings for RSA keys and ECDSA keys in sections 5.1.1 and 5.1.2 (along with a note that Firefox does not currently support RSASSA-PSS encodings).
  • Add the P-521 exclusion in section 5.1 of the Mozilla policy to section 2.3 where we list exceptions to the BRs.
  • Change references to “PITRA” in section 8 to “Point-in-Time Audit”, which is what we meant all along.
  • Update required minimum versions of audit criteria in section 3.1.
  • Formally require incident reporting.

A comparison of all the policy changes is available here.

A few of these changes may require that CAs revise their CP/CPS(s). Mozilla will send a CA Communication to alert CAs of the necessary changes, and ask CAs to provide the date by which their CP/CPS documents will be compliant.

We have also recently updated the Common CA Database (CCADB) Policy to provide specific guidance to CAs and auditors on audit statements. As a repository of information about publicly-trusted CAs, CCADB now automatically processes audit statements submitted by CAs. The requirements added in section 5.1 of the policy help to ensure that the automated processing is successful.

Help Test Firefox’s built-in HTML Sanitizer to protect against UXSS bugs

I recently gave a talk at OWASP Global AppSec in Amsterdam and summarized the presentation in a blog post about how to achieve “critical”-rated code execution vulnerabilities in Firefox with user-interface XSS. The end of that blog post encourages the reader to participate in the bug bounty program, but did not come with proper instructions. This blog post will describe the mitigations Firefox has in place to protect against XSS bugs and how to test them.

Our about: pages are privileged pages that control the browser (e.g., about:preferences, which contains Firefox settings). To gain arbitrary code execution, a successful XSS exploit has to bypass the Content Security Policy (CSP), which we have recently added, as well as our built-in XSS sanitizer. A bypass of the sanitizer without a CSP bypass is in itself a severe enough security bug and warrants a bounty, subject to the discretion of the Bounty Committee. See the bounty pages for more information, including how to submit findings.

How the Sanitizer works

The Sanitizer runs in the so-called “fragment parsing” step of innerHTML. In more detail, whenever someone uses innerHTML (or similar functionality that parses a string from JavaScript into HTML) the browser builds a DOM tree data structure. Before the newly parsed structure is appended to the existing DOM element, our sanitizer intervenes. This step ensures that our sanitizer cannot mismatch the result the actual parser would have created – because it is indeed the actual parser.

The line of code that triggers the sanitizer is in nsContentUtils::ParseFragmentHTML and nsContentUtils::ParseFragmentXML. The aforementioned link points to a specific source code revision, to make hotlinking easier. Please click the file name at the top of the page to get to the newest revision of the source code.

The sanitizer is implemented as an allow-list of elements, attributes and attribute values in nsTreeSanitizer.cpp. Please consult the allow-list before testing.
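
To give a feel for the allow-list approach, here is a toy sketch in Python. The element and attribute lists below are invented for illustration, and the example works on a serialized string rather than a DOM tree, so it does not reflect how nsTreeSanitizer.cpp is actually implemented – it only shows the general idea of dropping everything that isn’t explicitly allowed.

from html.parser import HTMLParser

ALLOWED_ELEMENTS = {"a", "b", "i", "p", "img", "div", "span"}
ALLOWED_ATTRIBUTES = {"href", "src", "alt", "title", "class"}

class ToySanitizer(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag not in ALLOWED_ELEMENTS:
            return  # drop unknown elements entirely
        kept = " ".join(f'{k}="{v}"' for k, v in attrs if k in ALLOWED_ATTRIBUTES)
        self.out.append(f"<{tag}{' ' + kept if kept else ''}>")

    def handle_endtag(self, tag):
        if tag in ALLOWED_ELEMENTS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)

s = ToySanitizer()
s.feed('<img src=x onerror=alert(1)>')
print("".join(s.out))  # <img src="x"> – the onerror handler is gone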

Finding a Sanitizer bypass is a hunt for Mutated XSS (mXSS) bugs in Firefox – unless, that is, you find an element in our allow-list that has recently become capable of running script.

How and where to test

A browser is a complicated application that consists of millions of lines of code. If you want to find new security issues, you should test the latest development version. We often rewrite lots of code that isn’t related to the issue you are testing but might still have a side effect. To make sure your bug actually affects end users, test Firefox Nightly. Otherwise, the issues you find in Beta or Release might have already been fixed in Nightly.

Sanitizer runs in all privileged pages

Some of Firefox’s internal pages have more privileges than regular web pages. For example about:config allows the user to modify advanced browser settings and hence relies on those expanded privileges.

Just open a new tab and navigate to about:config. Because it has access to privileged APIs, it cannot use innerHTML (and related functionality like outerHTML and so on) without going through the sanitizer.

Using Developer Tools to emulate a vulnerability

From about:config, open the Developer Tools console (go to Tools in the menu bar, select Web Developer, then Web Console, or press Ctrl+Shift+K).

To emulate an XSS vulnerability, type this into the console:

document.body.innerHTML = '<img src=x onerror=alert(1)>'

Observe how Firefox sanitizes the HTML markup by looking at the error in the console:

“Removed unsafe attribute. Element: img. Attribute: onerror.”

You may now go and try other variants of XSS against this sanitizer. Again, try to find an mXSS bug, or identify an allowed combination of element and attribute that executes script.

Finding an actual XSS vulnerability

Right, so for now we have emulated the Cross-Site Scripting (XSS) vulnerability by typing in innerHTML ourselves in the Web Console. That’s pretty much cheating. But as I said above: What we want to find are sanitizer bypasses. This is a call to test our mitigations.

But if you still want to find real XSS bugs in Firefox, I recommend you run some sort of smart static analysis on the Firefox JavaScript code. And by smart, I probably do not mean eslint-plugin-no-unsanitized.

Summary

This blog post described the mitigations Firefox has in place to protect against XSS bugs. These bugs can lead to remote code execution outside of the sandbox. We encourage the wider community to double check our work and look for omissions. This should be particularly interesting for people with a web security background, who want to learn more about browser security. Finding severe security bugs is very rewarding and we’re looking forward to getting some feedback. If you find something, please consult the Bug Bounty pages on how to report it.

Updates to the Mozilla Web Security Bounty Program

Mozilla was one of the first companies to establish a bug bounty program and we continually adjust it so that it stays as relevant now as it always has been. To celebrate the 15th anniversary of the Firefox 1.0 release, we are making significant enhancements to the web bug bounty program.

Increasing Bounty Payouts

We are doubling all web payouts for critical, core and other Mozilla sites as per the Web and Services Bug Bounty Program page. In addition, we are tripling payouts for Remote Code Execution on critical sites to $15,000!

Adding New Critical Sites to the Program

As we are constantly improving the services behind Firefox, we also need to ensure that sites we consider critical to our mission get the appropriate attention from the security community. Hence we have extended our web bug bounty program to include the following sites in the last 6 months:

  • Autograph – a cryptographic signature service that signs Mozilla products.
  • Lando – Mozilla’s new automatic code-landing service which allows us to easily commit Phabricator revisions to their destination repository.
  • Phabricator – a code management tool used for reviewing Firefox code changes.
  • Taskcluster – the task execution framework that supports Mozilla’s continuous integration and release processes (promoted from core to critical).

Adding New Core Sites to the Program

The sites we consider core to our mission have also been extended to include:

  • Firefox Monitor – a site where you can register your email address so that you can be informed if your account details are part of a data breach.
  • Localization – a service contributors can use to help localize Mozilla products.
  • Payment Subscription – a service that is used as the interface in front of the payment provider (Stripe).
  • Firefox Private Network – a site from which you can download a desktop extension that helps secure and protect your connection everywhere you use Firefox.
  • Ship It – a system that accepts requests for releases from humans and translates them into information and requests that our Buildbot-based release automation can process.
  • Speak To Me – Mozilla’s Speech Recognition API.

The new payouts have already been applied to the most recently reported web bugs.

We hope the new sites and increased payments will encourage you to have another look at our sites and help us keep them safe for everyone who uses the web.

Happy Birthday, Firefox. And happy bug hunting to you all!

Adding CodeQL and clang to our Bug Bounty Program

At GitHub Universe, GitHub announced the GitHub Security Lab, an initiative to help secure open source software alongside the community and an initial set of partners including Mozilla. As part of this announcement, GitHub is providing free access to CodeQL, a security research tool which makes it easier to identify flaws in open source software. Mozilla has used these tools privately for the past two years, and has been very impressed and hopeful about how these tools will improve software security. Mozilla recognizes the need to scale security to work automatically, and to tighten the feedback loop in the development <-> security auditing/engineering process.

One of the ways we’re supporting this initiative at Mozilla is through renewed investment in automation and static analysis. We think the broader Mozilla community can participate, and we want to encourage it. Today, we’re announcing a new area of our bug bounty program to encourage the community to use the CodeQL tools.  We are exploring the use of CodeQL tools and will award a bounty – above and beyond our existing bounties – for static analysis work that identifies present or historical flaws in Firefox.

The highlights of the bounty are:

  • We will accept static analysis queries written in CodeQL or as clang-based checkers (clang analyzer, clang plugin using the AST API or clang-tidy).
  • Each previously unknown security vulnerability your query matches will be eligible for a bug bounty per the normal policy.
  • The query itself will also be eligible for a bounty, the amount dependent upon the quality of the submission.
  • Queries that match historical issues but do not find new vulnerabilities are eligible. This means you can look through our historical advisories to find examples of issues you can write queries for.
  • Mozilla’s and GitHub’s bug bounties are compatible, not exclusive, so if you meet the requirements of both, you are eligible to receive bounties from both. (More details below.)
  • The full details of this program are available at our bug bounty program’s homepage.

When fixing any security bug, a retrospective is an important part of the remediation process which should provide answers to the following questions: Was this the only instance of this issue? Is this flaw representative of a wider systemic weakness that needs to be addressed? And most importantly: can we prevent an issue like this from ever occurring again? Variant analysis, driven manually, is usually the way to answer the first two questions. And static analysis, integrated in the development process, is one of the best ways to answer the third.

Besides our existing clang analyzer checks, we’ve made use of CodeQL over the past two years to do variant analysis. This tool lets us identify bugs both through targeted, zero-false-positive queries and through more expansive queries whose results give manual analysis a more complete and less noisy starting point than simple string matching. To see examples of where we’ve successfully used CodeQL, we have a meta tracking bug that illustrates the types of bugs we’ve identified.

We hope that security researchers will try out CodeQL too, and share both their findings and their experience with us. And of course regardless of how you find a vulnerability, you’re always welcome to submit bugs using the regular bug bounty program. So if you have custom static analysis tools, fuzzers, or just the mainstay of grep and coffee – you’re always invited.

Getting Started with CodeQL

GitHub is publishing a guide covering how to use CodeQL at https://securitylab.github.com/tools/codeql.

Getting Started with Clang Analyzer

We currently have a number of custom-written checks in our source tree. So the easiest way to write and run your check is to build Firefox, add 'ac_add_options --enable-clang-plugin' to your mozconfig, add your check, and then run './mach build' again.

To learn how to add your check, you can review this recent bug that added a couple of new checks – it shows how to add a new plugin to Checks.inc, ChecksIncludes.inc, and additionally how to add tests. This particular plugin also adds a couple of attributes that can be used in the codebase, which your plugin may or may not need. Note that depending on how you view the diffs, it may appear that the author modified existing files, but actually they copied an existing file, then modified the copy.

Future of CodeQL and clang within our Bug Bounty program

We retain the ability to be flexible. We’re planning to evaluate the effectiveness of the program when we reach $75,000 in rewards or after a year. After all, this is something new for us and for the bug bounty community. We – and GitHub – welcome your communication and feedback on the plan, especially candid feedback. If you’ve developed a query that you consider more valuable than what you think we’d reward, we would love to hear that. (If you’re keeping the query, hopefully you’re submitting the bugs to us so we can see that we are not meeting researcher expectations on reward.) And if you spent hours trying to write a query but couldn’t get over the learning curve – tell us and show us what problems you encountered!

We’re excited to see what the community can do with CodeQL and clang; and how we can work together to improve on our ability to deliver a browser that answers to no one but you.

Validating Delegated Credentials for TLS in Firefox

At Mozilla we are well aware of how fragile the Web Public Key Infrastructure (PKI) can be. From fraudulent Certification Authorities (CAs) to implementation errors that leak private keys, users, often unknowingly, are put in a position where their ability to establish trust on the Web is compromised. Therefore, in keeping with our mission to create a Web where individuals are empowered, independent and safe, we welcome ideas that are aimed at making the Web PKI more robust. With initiatives like our Common CA Database (CCADB), CRLite prototyping, and our involvement in the CA/Browser Forum, we’re committed to this objective, and this is why we embraced the opportunity to partner with Cloudflare to test Delegated Credentials for TLS in Firefox, which is currently undergoing standardization at the IETF.

As CAs are responsible for the creation of digital certificates, they dictate the lifetime of an issued certificate, as well as its usage parameters. Traditionally, end-entity certificates are long-lived, exhibiting lifetimes of more than one year. For server operators making use of Content Delivery Networks (CDNs) such as Cloudflare, this can be problematic because of the potential trust placed in CDNs regarding sensitive private key material. Of course, Cloudflare has architectural solutions for such key material, but these add unwanted latency to connections and present operational difficulties. To limit exposure, a short-lived certificate would be preferable for this setting. However, constant communication with an external CA to obtain short-lived certificates could result in poor performance or, even worse, lack of access to a service entirely.

The Delegated Credentials mechanism decentralizes the problem by allowing a TLS server to issue short-lived authentication credentials (with a validity period of no longer than 7 days) that are cryptographically bound to a CA-issued certificate. These short-lived credentials then serve as the authentication keys in a regular TLS 1.3 connection between a Firefox client and a CDN edge server situated in a low-trust zone (where the risk of compromise might be higher than usual and perhaps go undetected). This way, performance isn’t hindered and the compromise window is limited. For further technical details see this excellent blog post by Cloudflare on the subject.
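
To make the validity constraint concrete, below is a greatly simplified Python sketch of the time-window checks a client could apply before accepting a delegated credential. The field names and helper are invented for illustration; the actual mechanism also requires verifying a signature, made with the key in the CA-issued certificate, over a precisely defined message described in the IETF draft, which is omitted here.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MAX_VALIDITY = timedelta(days=7)  # clients reject delegations that last longer

@dataclass
class DelegatedCredential:
    valid_time: timedelta  # expiry, expressed relative to the certificate's notBefore

def credential_time_ok(dc, cert_not_before, cert_not_after, now):
    expiry = cert_not_before + dc.valid_time
    if expiry - now > MAX_VALIDITY:
        return False  # the delegation reaches too far into the future
    if now > expiry:
        return False  # the short-lived credential has already expired
    if expiry > cert_not_after:
        return False  # the delegation must not outlive the certificate itself
    return True

# Example: a four-day credential on a certificate issued yesterday is accepted.
now = datetime(2019, 11, 5, tzinfo=timezone.utc)
print(credential_time_ok(
    DelegatedCredential(valid_time=timedelta(days=4)),
    cert_not_before=now - timedelta(days=1),
    cert_not_after=now + timedelta(days=89),
    now=now,
))  # True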

See How The Experiment Works

We will soon test Delegated Credentials in Firefox Nightly via an experimental addon, called TLS Delegated Credentials Experiment. In this experiment, the addon will make a single request to a Cloudflare-managed host which supports Delegated Credentials. The Delegated Credentials feature is disabled in Firefox by default, but depending on the experiment conditions the addon will toggle it for the duration of this request. The connection result, including whether Delegated Credentials was enabled or not, gets reported via telemetry to allow for comparative study. Out of this we’re hoping to gain better insights into how effective and stable Delegated Credentials are in the real world and, more importantly, into any negative impact on user experience (for example, increased connection failure rates or slower TLS handshake times). The study is expected to start in mid-November and run for two weeks.

For specific details on the telemetry and how measurements will take place, see bug 1564179.

See The Results In Firefox

You can open a Firefox Nightly or Beta window and navigate to about:telemetry. From here, in the top-right is a Search box, where you can search for “delegated” to find all telemetry entries from our experiment. If Delegated Credentials have been used and telemetry is enabled, you can expect to see the count of Delegated Credentials-enabled handshakes as well as the time-to-completion of each. Additionally, if the addon has run the test, you can see the test result under the “Keyed Scalars” section.

Delegated Credentials telemetry in Nightly 72

You can also read more about telemetry, studies, and Mozilla’s privacy policy by navigating to about:preferences#privacy.

See It In Action

If you’d like to enable Delegated Credentials for your own testing or use, this can be done by:

  1. In a Firefox Nightly or Beta window, navigate to about:config.
  2. Search for the “security.tls.enable_delegated_credentials” preference – the preference list will update as you type, and “delegated” is itself enough to find the correct preference.
  3. Click the Toggle button to set the value to true.
  4. Navigate to https://dc.crypto.mozilla.org/
  5. If needed, toggling the value back to false will disable Delegated Credentials.

Note that currently, use of Delegated Credentials doesn’t appear anywhere in the Firefox UI. This will change as we evolve the implementation.

We would sincerely like to thank Christopher Patton, fellow Mozillian Wayne Thayer, and the Cloudflare team, particularly Nick Sullivan and Watson Ladd for helping us to get to this point with the Delegated Credentials feature. The Mozilla team will keep you informed on the development of this feature for use in Firefox, and we look forward to sharing our results in a future blog post.

Improved Security and Privacy Indicators in Firefox 70

The upcoming Firefox 70 release will update the security and privacy indicators in the URL bar.

In recent years we have seen a great increase in the number of websites that are delivered securely via HTTPS. At the same time, privacy threats have become more prevalent on the web and Firefox has shipped new technologies to protect our users against tracking.

To better reflect this new environment, the updated UI takes a step towards treating secure HTTPS as the default method of transport for websites, instead of a way to identify website security. It also puts greater emphasis on user privacy.

This post will outline the major changes to our primary security indicators:

  • A new permanent “protections” icon to access information about the restrictions Firefox is applying to the page to protect your privacy.
  • A new crossed-out lock icon as an indicator for insecure HTTP and a new color for the lock icon that marks sites delivered securely.
  • A new placement for Extended Validation (EV) indicators.

Streamlining Security and Identity Indicators

Firefox traditionally marked sites delivered via a secure transport mechanism with a green lock icon. Sites delivered via insecure mechanisms got no additional security indicators. All sites were marked with an “information” icon, which served as an access point for more site information.

Before and after comparison of new identity icons

As part of the changes in Firefox 70, we will start showing a crossed-out lock icon as a permanent indicator for sites delivered via the insecure protocols HTTP and FTP. Over two years ago, we started showing this indicator for insecure login pages. We also announced our intent to expand its use by showing a negative indicator for all HTTP pages as HTTPS adoption increases. By now, Firefox loads about 80% of pages via HTTPS.

The formerly green lock icon will now become gray, with the intention of de-emphasizing the default (secure) connection state and instead putting more emphasis on broken or insecure connections.

We will remove the “information” icon. The lock icon will be the new entry point for accessing security and identity information about the website.

Moving the EV indicator out of the URL Bar

A recent study by Thompson et al. shows that displaying the company name and country in the URL bar when a website uses an Extended Validation TLS certificate does not provide any additional security benefit. One of the biggest downsides of this approach is that it requires the user to notice the absence of the EV indicator on a malicious site. Furthermore, it has been demonstrated that EV certificates with colliding entity names can be obtained by choosing a different jurisdiction.

As a result, we will relocate the EV indicator to the “Site Information” panel that is accessed by clicking on the lock icon. This change will hide the indicator from the majority of our users while keeping it accessible for those who need to access it. It also avoids ambiguities that could previously arise when the entity name in the URL bar was cut off to make space for the URL.

Image showing the new EV Indicator in the identity panel

Adding a new Protections Icon

The protections icon will be the entry point for the privacy properties of every page. It lets the user know about trackers or cryptominers on the page and how Firefox restricts them to improve privacy and performance. The icon will have 3 different states.

An overview of the different protection icons

Protections Enabled
When no tracking activity is detected and protections are not necessary, the shield shows in grey.

Protections Active
When protections are active on the current page, the shield displays a very subtle animation and adopts the purple gradient.

Protections Disabled
When the user has disabled protections for the site, the shield shows with a strike-through.

We are excited to roll out this improved new UI and will continue to evolve the indicators to give Firefox users an easy way to assess their privacy and security anywhere on the modern web.

A big thank you to all the individuals who contributed to this effort.