Firefox 75 will respect ‘nosniff’ for Page Loads

Before a web page can be displayed within a browser, the rendering engine checks and verifies the MIME type of the document being loaded. In the case of an HTML page, for example, the rendering engine expects a MIME type of ‘text/html’. Unfortunately, time and time again, misconfigured web servers incorrectly use a MIME type which does not match the actual type of the resource. If strictly enforced, this mismatch in types would degrade a user’s experience: the rendering engine within a browser will try to interpret the resource based on the ruleset for the provided MIME type, and at some point would simply have to give up trying to display the resource. To compensate, Firefox implements a MIME type sniffing algorithm: amongst other techniques, Firefox inspects the initial bytes of a file and searches for ‘magic numbers’ which allow it to determine the MIME type of a file independently of the one set by the server.
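
Conceptually, the magic-number step looks something like the following sketch (a simplified JavaScript illustration with a tiny signature table; Firefox’s actual sniffer is implemented in C++ and handles many more cases):

  const MAGIC_NUMBERS = [
    { bytes: [0xff, 0xd8, 0xff], type: 'image/jpeg' },
    { bytes: [0x89, 0x50, 0x4e, 0x47], type: 'image/png' },
    { bytes: [0x25, 0x50, 0x44, 0x46], type: 'application/pdf' },
  ];

  // Inspect the initial bytes of a resource and return a sniffed MIME
  // type, or null if no known signature matches.
  function sniffMimeType(arrayBuffer) {
    const head = new Uint8Array(arrayBuffer, 0, Math.min(16, arrayBuffer.byteLength));
    for (const { bytes, type } of MAGIC_NUMBERS) {
      if (bytes.every((b, i) => head[i] === b)) {
        return type;
      }
    }
    return null;
  }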

Whilst sniffing the MIME type of a document improves the browsing experience for the majority of users, it also enables so-called MIME confusion attacks. In more detail, imagine an application which allows hosting of images. Let’s further assume the application allows users to upload ‘.jpg’ files but fails to correctly verify that users actually upload only valid .jpg files. An attacker could craft an ‘evil.jpg’ file containing valid HTML and upload it through the application. The innocent victim of that application expects only images to be displayed. Within the browser, however, the MIME sniffer steps in, determines that the file contains valid HTML, and overrides the MIME type to load the file like any other page within the application. Additionally, embedded JavaScript fragments within that page will be treated as same-origin and hence be granted the same permissions as the host application. In turn, those permissions allow the attacker to gain access to confidential user information.

To mitigate such MIME confusion attacks, Firefox expands support of the header ‘X-Content-Type-Options: nosniff’ to page loads (view specification). Firefox has supported ‘XCTO: nosniff’ for JavaScript and CSS resources since Firefox 50, and starting with Firefox 75 it will use the provided MIME type for page loads even if it is incorrect.

Left: Firefox 74 ignoring XCTO, sniffing HTML, and executing script.
Right: Firefox 75 respecting XCTO, and defaulting to plaintext.

If the provided MIME type does not match the content, Firefox will not sniff the MIME type but will show an error to the user instead. As illustrated above, if no MIME type was provided at all, Firefox will try to render the document as plaintext or prompt a download. Showing the user an error on a detected MIME type mismatch, instead of trying to sniff and render a potentially malicious page, allows Firefox to mitigate such MIME confusion attacks.
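
Server operators can opt in to this stricter behavior by sending the header on their responses. A minimal sketch using Node’s built-in http module (the port and response body are illustrative):

  const http = require('http');

  http.createServer((req, res) => {
    // Declare the real MIME type and forbid the browser from
    // second-guessing it via content sniffing.
    res.setHeader('Content-Type', 'text/html; charset=utf-8');
    res.setHeader('X-Content-Type-Options', 'nosniff');
    res.end('<!DOCTYPE html><title>hello</title>');
  }).listen(8080);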

Multi-Account Containers Add-on Sync Feature

The Multi-Account Containers Add-on will now sync your container configuration and site assignments.

Firefox Multi-Account Containers allows users to separate their online identities into different tab types called Containers. Each Container has its own separate storage and cookies. This way, browsing activity in one Container is not accessible to websites in other Containers. This privacy feature also allows users to assign sites to open only in a specific Container. For instance, users can set their shopping websites to always open in a Shopping Container, which keeps advertising tracking data from those websites separate from the user’s Work Container. Users can also use Containers for separate areas of their life, like work and personal email, and can separate email accounts from the same provider so they don’t have to log in and out of each account. For more information about how to use the Containers add-on, visit the Mozilla support page.

The new sync feature will align Multi-Account Containers across different computers. The add-on carries over Container names, colors, icons, and site assignments to any other machines signed in to the same Firefox account.

If you have allowed automatic updates of the add-on, your extension should update on its own. The first time you click the Multi-Account Container icon after the update, an on-boarding panel will allow you to activate sync.

In order to use this feature, you will need to be signed in to a Firefox account in Firefox.

CRLite: Speeding Up Secure Browsing

CRLite pushes bulk certificate revocation information to Firefox users, reducing the need to actively query such information one by one. Additionally, this new technology eliminates the privacy leak that individual queries can bring, and does so for the whole Web, not just special parts of it. The first two posts in this series about the newly-added CRLite technology provide background: Introducing CRLite: All of the Web PKI’s revocations, compressed and The End-to-End Design of CRLite.

Since mid-December, our pre-release Firefox Nightly users have been evaluating our CRLite system while performing normal web browsing. Gathering information through Firefox Telemetry has allowed us to verify the effectiveness of CRLite.

The questions we particularly wanted to answer about Firefox’s use of CRLite were:

  1. What were the results of checking the CRLite filter?
    1. Did it find that the certificate was too new for the installed CRLite filter?
    2. Was the certificate valid, revoked, or not included?
    3. Was the CRLite filter unavailable?
  2. How quickly did the CRLite filter check return compared to actively querying status using the Online Certificate Status Protocol (OCSP)?

How Well Does CRLite Work?

With Telemetry enabled in Firefox Nightly, each invocation of CRLite emits one of the following results (see the sketch after this list):

  • Certificate Valid, indicating that CRLite authoritatively returned that the certificate was valid.
  • Certificate Revoked, indicating that CRLite authoritatively returned that the certificate was revoked.
  • Issuer Not Enrolled, meaning the certificate being evaluated wasn’t included in the CRLite filter set, likely because the issuing Certificate Authority (CA) did not publish CRLs.
  • Certificate Too New, meaning the certificate being evaluated was newer than the CRLite filter.
  • Filter Not Available, meaning that the CRLite filter either had not yet been downloaded from Remote Settings, or had become so stale as to be out-of-service.
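
As a rough mental model, these buckets might be derived from a lookup along the lines of this hypothetical sketch (the field and method names are illustrative assumptions, not Firefox’s internals):

  // Hypothetical sketch of how a CRLite lookup maps to the telemetry
  // buckets above; all names here are illustrative assumptions.
  function crliteResult(cert, filter) {
    if (filter === null || filter.isStale()) return 'FilterNotAvailable';
    if (cert.notBefore > filter.coverageEnd) return 'CertificateTooNew';
    if (!filter.enrolledIssuers.has(cert.issuerId)) return 'IssuerNotEnrolled';
    return filter.isRevoked(cert) ? 'CertificateRevoked' : 'CertificateValid';
  }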

Figure 1: One month of CRLite results in Firefox Nightly (5 December 2019 to 6 January 2020)

Immediately, one sees that over 54% of secure connections (500M) could have benefited from the improved privacy and performance of CRLite for Firefox Nightly users.

Of the other data:

  • We plan to publish updates up to 4 times per day, which will reduce the incidence of the Certificate Too New result.
  • The Filter Not Available bucket correlates well with independent telemetry indicating a higher-than-expected level of download issues retrieving CRLite filters from Remote Settings; work to improve that is underway.
  • Revoked certificates that are actively used on the Web PKI are, and should be, rare. This number is in line with other Firefox Nightly telemetry for TLS connection results.

How Much Faster is CRLite?

In contrast to OCSP which requires a network round-trip to complete before a web page can load, CRLite needs only to perform a handful of hash calculations and memory or disk lookups. We expected that CRLite would generally outperform OCSP, but to confirm we added measurements and let OCSP and CRLite race each other in Firefox Nightly.
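
The race can be pictured as something like this sketch, where checkCRLite and checkOCSP are hypothetical stand-ins for the real C++ code paths:

  // Race the two revocation checks and record which one settles first;
  // checkCRLite and checkOCSP are hypothetical stand-ins.
  async function raceRevocationChecks(cert) {
    const start = performance.now();
    const timed = (name, promise) =>
      promise.then(() => ({ name, ms: performance.now() - start }));
    return Promise.race([
      timed('crlite', checkCRLite(cert)), // hash calculations + local lookups
      timed('ocsp', checkOCSP(cert)),     // network round-trip
    ]);
  }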

Figure 2: How often is CRLite faster than OCSP? (11 December 2019 to 6 January 2020)

Over the month of data, CRLite was faster to query than OCSP 98.844% of the time.

CRLite is Faster 99% of the Time

The speedup of CRLite versus OCSP was rather stark; 56% of the time, CRLite was over 100 milliseconds faster than OCSP, which is a substantial and perceptible improvement in browsing performance.

Figure 3: Distribution of occurrences where CRLite outperformed OCSP, which was 99% of CRLite operations. [source]

Almost 10% of the collected data reports showed an entire second of speedup, indicating that OCSP reached its default timeout. The delay in this figure is time a Firefox user spends waiting for the page to start loading, so it has a substantial impact on perceived quickness in the browser.

Verifying that outlier at the timeout, our OCSP telemetry probe shows that over the same period, 9.9% of OCSP queries timed out:

Figure 4: Results of Live OCSP queries in Firefox Nightly [source]

Generally speaking, when loading a website where OCSP wasn’t already cached, 10% of the time Firefox users pause for a full second before the site loads, and they don’t even get revocation data in exchange for the wait.

The 1% When OCSP is Faster

On the 500k occasions when OCSP was faster than CRLite, it was generally not much faster: on 50% of those occasions it was less than 40 milliseconds faster, and on only 20% was it 100 milliseconds or more faster.

Figure 5: Distribution of occurrences where OCSP outperformed CRLite, which was 1% of CRLite operations. [source]

Interesting as this is, it represents only 1% of CRLite invocations for Firefox Nightly users in this time period. Almost 99% of CRLite operations were faster, much faster.

Much Faster, More Private, and Still Secure

Our study confirmed that CRLite will maintain the integrity of our live revocation checking mechanisms while also speeding up TLS connections.

At this point it’s clear that CRLite lets us keep checking certificate revocations in the Web PKI without compromising on speed, and the remaining areas for improvement are on shrinking our update files closer to the ideal described in the original CRLite paper.

In the upcoming Part 4 of this series, we’ll discuss the architecture of the CRLite back-end infrastructure, the iteration from the initial research prototype, and interesting challenges of working in a “big data” context for the Web PKI.

In Part 5 of this series, we will discuss what we’re doing to make CRLite as robust and as safe as possible.

January 2020 CA Communication

Mozilla has sent a CA Communication to inform Certificate Authorities (CAs) who have root certificates included in Mozilla’s program about current events relevant to their membership in our program and to remind them of upcoming deadlines. This CA Communication has been emailed to the Primary Point of Contact (POC) and an email alias for each CA in Mozilla’s program, and they have been asked to respond to the following 7 action items:

  1. Read and fully comply with version 2.7 of Mozilla’s Root Store Policy.
  2. Ensure that their CP and CPS complies with the updated policy section 3.3 requiring the proper use of “No Stipulation” and mapping of policy documents to CA certificates.
  3. Confirm their intent to comply with section 5.2 of Mozilla’s Root Store Policy requiring that new end-entity certificates include an EKU extension expressing their intended usage.
  4. Verify that their audit statements meet Mozilla’s formatting requirements that facilitate automated processing.
  5. Resolve issues with audits for intermediate CA certificates that have been identified by the automated audit report validation system.
  6. Confirm awareness of Mozilla’s Incident Reporting requirements and the intent to provide good incident reports.
  7. Confirm compliance with the current version of the CA/Browser Forum Baseline Requirements.

The full action items can be read here. Responses to the survey will be automatically and immediately published by the CCADB.

With this CA Communication, we reiterate that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

The End-to-End Design of CRLite

CRLite is a technology to efficiently compress revocation information for the whole Web PKI into a format easily delivered to Web users. It addresses the performance and privacy pitfalls of the Online Certificate Status Protocol (OCSP) while avoiding a need for some administrative decisions on the relative value of one revocation versus another. For details on the background of CRLite, see our first post, Introducing CRLite: All of the Web PKI’s revocations, compressed.

To discuss CRLite’s design, let’s first discuss the input data, and from that we can discuss how the system is made reliable.

Introducing CRLite: All of the Web PKI’s revocations, compressed

CRLite is a technology proposed by a group of researchers at the IEEE Symposium on Security and Privacy 2017 that compresses revocation information so effectively that 300 megabytes of revocation data can become 1 megabyte. It accomplishes this by combining Certificate Transparency data and Internet scan results with cascading Bloom filters, building a data structure that is reliable, easy to verify, and easy to update.
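
Querying a filter cascade is cheap. Here is a sketch of the lookup logic, assuming each Bloom filter exposes a mightContain method (this mirrors the construction from the paper rather than Firefox’s exact code):

  // Level 0 encodes the revoked set; each deeper level encodes the
  // false positives of the level above it.
  function crliteLookup(certId, cascade) {
    for (let level = 0; level < cascade.length; level++) {
      if (!cascade[level].mightContain(certId)) {
        // Bloom filters have no false negatives, so absence is decisive:
        // absent at an even level means not revoked, at an odd level revoked.
        return level % 2 === 0 ? 'not revoked' : 'revoked';
      }
    }
    // Present in every filter: the deepest level is exact by construction.
    return (cascade.length - 1) % 2 === 0 ? 'revoked' : 'not revoked';
  }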

Since December, Firefox Nightly has been shipping with CRLite, collecting telemetry on its effectiveness and speed. As can be imagined, replacing a network round-trip with local lookups makes for a substantial performance improvement. Mozilla currently updates the CRLite dataset four times per day, although not all updates are currently delivered to clients.

Revocations on the Web PKI: Past and Present

The design of the Web’s Public Key Infrastructure (PKI) included the idea that website certificates would be revocable to indicate that they are no longer safe to trust: perhaps because the server they were used on was being decommissioned, or there had been a security incident. In practice, this has been more of an aspiration, as the imagined mechanisms showed their shortcomings:

  • Certificate Revocation Lists (CRLs) quickly became large, and contained mostly irrelevant data, so web browsers didn’t download them;
  • The Online Certificate Status Protocol (OCSP) was unreliable, so web browsers had to assume that the website was still valid whenever a check failed.

Since revocation is still crucial for protecting users, browsers built administratively-managed, centralized revocation lists: Firefox’s OneCRL, combined with Safe Browsing’s URL-specific warnings, provide the tools needed to handle major security incidents, but opinions differ on what to do about finer-grained revocation needs and the role of OCSP.

The Unreliability of Online Status Checks

Much has been written on the subject of OCSP reliability, and while reliability has definitely improved in recent years (per Firefox telemetry; failure rate), it still suffers under less-than-perfect network conditions: even among our Beta population, which historically has above-average connectivity, over 7% of OCSP checks time out today.

Because of this, it’s impractical to require OCSP to succeed for a connection to be secure, and in turn, an adversarial monster-in-the-middle (MITM) can simply block OCSP to achieve their ends. A couple of classic articles cover this in more depth.

Mozilla has been making improvements in this realm for some time, implementing OCSP Must-Staple, which was designed as a solution to this problem, while continuing to use online status checks whenever there’s no stapled response.

We’ve also made Firefox skip revocation information for short-lived certificates; however, despite improvements in automation, such short-lived certificates still make up a very small portion of the Web PKI, because the majority of certificates are long-lived.
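
That heuristic amounts to comparing a certificate’s validity window against a cutoff, roughly as in this sketch (the 10-day threshold is an illustrative assumption, not necessarily Firefox’s exact value):

  // Skip live revocation fetches for short-lived certificates; the
  // threshold below is an illustrative assumption.
  const SHORT_LIFETIME_DAYS = 10;

  function skipsRevocationCheck(cert) {
    const lifetimeDays = (cert.notAfter - cert.notBefore) / (24 * 60 * 60 * 1000);
    return lifetimeDays <= SHORT_LIFETIME_DAYS;
  }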

Does Decentralized Revocation Bring Dangers?

The ideal in question is whether a Certificate Authority’s (CA) revocation should be directly relied upon by end-users.

There are legitimate concerns that respecting CA revocations could be a path to enabling CAs to censor websites. This would be particularly troubling in the event of increased consolidation in the CA market. However, at present, if one CA were to engage in censorship, the website operator could go to a different CA.

If censorship concerns do bear out, then Mozilla has the option to use its root store policy to influence the situation in accordance with our manifesto.

Does Decentralized Revocation Bring Value?

Legitimate revocations are either done by the issuing CA because of a security incident or policy violation, or they are done on behalf of the certificate’s owner, for their own purposes. Either way, the revocation codifies an intention to render the certificate unusable, perhaps due to key compromise or a service provider change, or as was done in the wake of Heartbleed.

Choosing specific revocations to honor while refusing others dismisses the intentions behind all the left-behind revocation attempts. For Mozilla, it violates principle 6 of our manifesto, limiting participation in the Web PKI’s security model.

There is a cost to supporting all revocations – checking OCSP:

  1. Slows down our first connection by ~130 milliseconds (CERT_VALIDATION_HTTP_REQUEST_SUCCEEDED_TIME, https://mzl.la/2ogT8TJ),
  2. Fails unsafe, if an adversary is in control of the web connection, and
  3. Periodically reveals to the CA the HTTPS web host that a user is visiting.

Luckily, CRLite gives us the ability to deliver all the revocation knowledge needed to replace OCSP, and do so quickly, compactly, and accurately.

Can CRLite Replace OCSP?

Firefox Nightly users are currently only using CRLite for telemetry, but by changing the preference security.pki.crlite_mode to 2, CRLite can enter “enforcing” mode and respect CRLite revocations for eligible websites. There’s not yet a mode to disable OCSP; there’ll be more on that in subsequent posts.
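
For example, the preference can be changed in about:config, or persisted in a profile’s user.js:

  // In a Firefox profile's user.js; the value 2 puts CRLite into
  // "enforcing" mode as described above.
  user_pref("security.pki.crlite_mode", 2);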

This blog post is the first in a series discussing the technology for CRLite, the observed effects, and the nature of a collaboration of this magnitude between industry and academia. The next post discusses the end-to-end design of the CRLite mechanism, and why it works. Additionally, some FAQs about CRLite are available on GitHub.

Firefox 72 blocks third-party fingerprinting resources

Privacy is a human right, and is core to Mozilla’s mission. However, many companies on the web erode privacy when they collect a significant amount of personal information. Companies record our browsing history and the actions we take across websites. This practice is known as cross-site tracking, and its harms include unwanted targeted advertising and divisive political messaging.

Last year we launched Enhanced Tracking Protection (ETP) to protect our users from cross-site tracking. In Firefox 72, we are expanding that protection to include a particularly invasive form of cross-site tracking: browser fingerprinting. This is the practice of identifying a user by the unique characteristics of their browser and device. A fingerprinting script might collect the user’s screen size, browser and operating system type, the fonts the user has installed, and other device properties—all to build a unique “fingerprint” that differentiates one user’s browser from another.
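
For illustration, a fingerprinting script might gather signals along these lines (a sketch; real scripts collect many more signals, such as canvas and font data, and use a proper digest):

  // Illustrative sketch of fingerprinting; not taken from any real tracker.
  function buildFingerprint() {
    const signals = [
      navigator.userAgent,
      navigator.language,
      String(navigator.hardwareConcurrency),
      screen.width + 'x' + screen.height + 'x' + screen.colorDepth,
      Intl.DateTimeFormat().resolvedOptions().timeZone,
    ];
    // A trivial string hash stands in for a real digest function.
    let hash = 0;
    for (const ch of signals.join('|')) {
      hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
    }
    return hash.toString(16);
  }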

Fingerprinting is bad for the web. It allows companies to track users for months, even after users clear their browser storage or use private browsing mode. Despite a near complete agreement between standards bodies and browser vendors that fingerprinting is harmful, its use on the web has steadily increased over the past decade.

We are committed to finding a way to protect users from fingerprinting without breaking the websites they visit. There are two primary ways to protect against fingerprinting: to block parties that participate in fingerprinting, or to change or remove APIs that can be used to fingerprint users.

Firefox 72 protects users against fingerprinting by blocking all third-party requests to companies that are known to participate in fingerprinting. This prevents those parties from being able to inspect properties of a user’s device using JavaScript. It also prevents them from receiving information that is revealed through network requests, such as the user’s IP address or the user agent header.

We’ve partnered with Disconnect to provide this protection. Disconnect maintains a list of companies that participate in cross-site tracking, as well as a list of those that fingerprint users. Firefox blocks all parties that meet both criteria [0]. We’ve adapted measurement techniques from past academic research to help Disconnect discover new fingerprinting domains. Disconnect performs a rigorous, public evaluation of each potential fingerprinting domain before adding it to the blocklist.

Firefox’s blocking of fingerprinting resources represents our first step in stemming the adoption of fingerprinting technologies. The path forward in the fight against fingerprinting will likely involve both script blocking and API-level protections. We will continue to monitor fingerprinting on the web, and will work with Disconnect to build out the set of domains blocked by Firefox. Expect to hear more updates from us as we continue to strengthen the protections provided by ETP.

[0] A tracker on Disconnect’s blocklist is any domain in the Advertising, Analytics, Social, Content, or Disconnect category. A fingerprinter is any domain in the Fingerprinting category. Firefox blocks domains in the intersection of these two classifications, i.e., a domain that is both in one of the tracking categories and in the fingerprinting category.
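
In code, that intersection might be computed along these lines (a sketch; the input shape is an assumption and Disconnect’s actual list format differs):

  // Block only domains that appear in both a tracking category and the
  // Fingerprinting category, per footnote [0].
  const TRACKING = ['Advertising', 'Analytics', 'Social', 'Content', 'Disconnect'];

  function fingerprintingBlocklist(categories) {
    const trackers = new Set(TRACKING.flatMap((c) => categories[c] || []));
    return (categories['Fingerprinting'] || []).filter((d) => trackers.has(d));
  }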

Announcing Version 2.7 of the Mozilla Root Store Policy

After many months of discussion on the mozilla.dev.security.policy mailing list, our Root Store Policy governing Certificate Authorities (CAs) that are trusted in Mozilla products has been updated. Version 2.7 has an effective date of January 1st, 2020.

More than one dozen issues were addressed in this update, including the following changes:

  • Beginning on 1 July 2020, end-entity certificates MUST include an Extended Key Usage (EKU) extension containing KeyPurposeId(s) describing the intended usage(s) of the certificate, and the EKU extension MUST NOT contain the KeyPurposeId anyExtendedKeyUsage. This requirement is driven by the issues we’ve had with non-TLS certificates that are technically capable of being used for TLS. Some CAs have argued that certificates not intended for TLS usage are not required to comply with TLS policies; however, it is not possible to enforce policy based on intention alone.
  • Certificate Policy and Certificate Practice Statement (CP/CPS) versions dated after March 2020 can’t contain blank sections and must – in accordance with RFC 3647 – only use “No Stipulation” to mean that no requirements are imposed. That term cannot be used to mean that the section is “Not Applicable”. For example, “No Stipulation” in section 3.2.2.6 “Wildcard Domain Validation” means that the policy allows wildcard certificates to be issued.
  • Section 8 “Operational Changes” will apply to unconstrained subordinate CA certificates chaining up to root certificates in Mozilla’s program. With this change, any new unconstrained subordinate CA certificates that are transferred or signed for a different organization that doesn’t already have control of a subordinate CA certificate must undergo a public discussion before issuing certificates.
  • We’ve seen a number of instances in which a CA has multiple policy documents and there is no clear way to determine which policies apply to which certificates. Under our new policy, CAs must provide a way to clearly determine which CP/CPS applies to each root and intermediate certificate. This may require changes to CAs’ policy documents.
  • Mozilla already has a “required practice” that forbids delegation of email validation to third parties for S/MIME certificates. With this update, we add this requirement to our policy. Specifically, CAs must not delegate validation of the domain part of an email address to a third party.
  • We’ve also added specific S/MIME revocation requirements to policy in place of the existing unclear requirement for S/MIME certificates to follow the BR 4.9.1 revocation requirements. The new policy does not include specific requirements on the time in which S/MIME certificates must be revoked.

Other changes include:

  • Detail the permitted signature algorithms and encodings for RSA keys and ECDSA keys in sections 5.1.1 and 5.1.2 (along with a note that Firefox does not currently support RSASSA-PSS encodings).
  • Add the P-521 exclusion in section 5.1 of the Mozilla policy to section 2.3, where we list exceptions to the BRs.
  • Change references to “PITRA” in section 8 to “Point-in-Time Audit”, which is what we meant all along.
  • Update required minimum versions of audit criteria in section 3.1.
  • Formally require incident reporting.

A comparison of all the policy changes is available here.

A few of these changes may require that CAs revise their CP/CPS(s). Mozilla will send a CA Communication to alert CAs of the necessary changes, and ask CAs to provide the date by which their CP/CPS documents will be compliant.

We have also recently updated the Common CA Database (CCADB) Policy to provide specific guidance to CAs and auditors on audit statements. As a repository of information about publicly-trusted CAs, CCADB now automatically processes audit statements submitted by CAs. The requirements added in section 5.1 of the policy help to ensure that the automated processing is successful.

Help Test Firefox’s built-in HTML Sanitizer to protect against UXSS bugs

I recently gave a talk at OWASP Global AppSec in Amsterdam and summarized the presentation in a blog post about how to achieve “critical”-rated code execution vulnerabilities in Firefox with user-interface XSS. The end of that blog post encourages the reader to participate in the bug bounty program, but did not come with proper instructions. This blog post will describe the mitigations Firefox has in place to protect against XSS bugs and how to test them.

Our about: pages are privileged pages that control the browser (e.g., about:preferences, which contains Firefox settings). To gain arbitrary code execution, a successful XSS exploit has to bypass both the Content Security Policy (CSP), which we have recently added, and our built-in XSS sanitizer. A bypass of the sanitizer without a CSP bypass is in itself a severe-enough security bug and warrants a bounty, subject to the discretion of the Bounty Committee. See the bounty pages for more information, including how to submit findings.

How the Sanitizer works

The Sanitizer runs in the so-called “fragment parsing” step of innerHTML. In more detail, whenever someone uses innerHTML (or similar functionality that parses a string from JavaScript into HTML), the browser builds a DOM tree data structure. Before the newly parsed structure is appended to the existing DOM element, our sanitizer intervenes. This step ensures that our sanitizer cannot mismatch the result the actual parser would have created, because it is indeed the actual parser.

The line of code that triggers the sanitizer is in nsContentUtils::ParseFragmentHTML and nsContentUtils::ParseFragmentXML. The aforementioned link points to a specific source code revision, to make hotlinking easier. Please click the file name at the top of the page to get to the newest revision of the source code.

The sanitizer is implemented as an allow-list of elements, attributes and attribute values in nsTreeSanitizer.cpp. Please consult the allow-list before testing.
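
As a much-reduced illustration of the approach, allow-list sanitization over a parsed fragment might look like this JavaScript sketch (Firefox’s real sanitizer is C++, and the lists in nsTreeSanitizer.cpp are far longer):

  // Tiny illustrative allow-lists; the real ones are far longer.
  const ALLOWED_ELEMENTS = new Set(['DIV', 'SPAN', 'P', 'A', 'IMG']);
  const ALLOWED_ATTRIBUTES = new Set(['href', 'src', 'title', 'alt']);

  function sanitizeFragment(fragment) {
    for (const el of [...fragment.querySelectorAll('*')]) {
      if (!ALLOWED_ELEMENTS.has(el.tagName)) {
        el.remove(); // drop disallowed elements such as <script>
        continue;
      }
      for (const attr of [...el.attributes]) {
        if (!ALLOWED_ATTRIBUTES.has(attr.name)) {
          el.removeAttribute(attr.name); // drop e.g. onerror handlers
        }
      }
    }
    return fragment;
  }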

Finding a sanitizer bypass is a hunt for mutated XSS (mXSS) bugs in Firefox, unless you find an element in our allow-list that has recently become capable of running script.

How and where to test

A browser is a complicated application consisting of millions of lines of code. If you want to find new security issues, you should test the latest development version. We often rewrite lots of code that isn’t related to the issue you are testing but might still have a side effect. To make sure your bug actually affects end users, test Firefox Nightly; otherwise, the issues you find in Beta or Release might already have been fixed in Nightly.

Sanitizer runs in all privileged pages

Some of Firefox’s internal pages have more privileges than regular web pages. For example about:config allows the user to modify advanced browser settings and hence relies on those expanded privileges.

Just open a new tab and navigate to about:config. Because the page has access to privileged APIs, it cannot use innerHTML (or related functionality like outerHTML and so on) without going through the sanitizer.

Using Developer Tools to emulate a vulnerability

From about:config, open the Developer Tools console (go to Tools in the menu bar, select Web Developer, then Web Console, or press Ctrl+Shift+K).

To emulate an XSS vulnerability, type this into the console:

document.body.innerHTML = '<img src=x onerror=alert(1)>'

Observe how Firefox sanitizes the HTML markup by looking at the error in the console:

“Removed unsafe attribute. Element: img. Attribute: onerror.”

You may now go and try other variants of XSS against this sanitizer. Again, try finding an mXSS bug, or identify an allowed combination of element and attribute that executes script.
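
A few example probes to paste into the console; each of these should be neutralized, and script execution would suggest a bypass worth reporting:

  // Each of these should be stripped or defanged by the sanitizer.
  document.body.innerHTML = '<svg onload=alert(1)>';
  document.body.innerHTML = '<iframe srcdoc="<script>alert(1)<\/script>">';
  document.body.innerHTML = '<a href="javascript:alert(1)">click</a>';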

Finding an actual XSS vulnerability

Right, so for now we have emulated the Cross-Site Scripting (XSS) vulnerability by typing in innerHTML ourselves in the Web Console. That’s pretty much cheating. But as I said above: What we want to find are sanitizer bypasses. This is a call to test our mitigations.

But if you still want to find real XSS bugs in Firefox, I recommend you run some sort of smart static analysis on the Firefox JavaScript code. And by smart, I probably do not mean eslint-plugin-no-unsanitized.

Summary

This blog post described the mitigations Firefox has in place to protect against XSS bugs. These bugs can lead to remote code execution outside of the sandbox. We encourage the wider community to double check our work and look for omissions. This should be particularly interesting for people with a web security background, who want to learn more about browser security. Finding severe security bugs is very rewarding and we’re looking forward to getting some feedback. If you find something, please consult the Bug Bounty pages on how to report it.

Updates to the Mozilla Web Security Bounty Program

Mozilla was one of the first companies to establish a bug bounty program, and we continually adjust it so that it stays as relevant now as it always has been. To celebrate the 15th anniversary of the Firefox 1.0 release, we are making significant enhancements to the web bug bounty program.

Increasing Bounty Payouts

We are doubling all web payouts for critical, core, and other Mozilla sites, as per the Web and Services Bug Bounty Program page. In addition, we are tripling payouts for Remote Code Execution on critical sites to $15,000!

Adding New Critical Sites to the Program

As we are constantly improving the services behind Firefox, we also need to ensure that sites we consider critical to our mission get the appropriate attention from the security community. Hence, we have extended our web bug bounty program to include the following sites over the last 6 months:

  • Autograph – a cryptographic signature service that signs Mozilla products.
  • Lando – Mozilla’s new automatic code-landing service which allows us to easily commit Phabricator revisions to their destination repository.
  • Phabricator – a code management tool used for reviewing Firefox code changes.
  • Taskcluster – the task execution framework that supports Mozilla’s continuous integration and release processes (promoted from core to critical).

Adding New Core Sites to the Program

The sites we consider core to our mission have also been extended to include:

  • Firefox Monitor – a site where you can register your email address so that you can be informed if your account details are part of a data breach.
  • Localization – a service contributors can use to help localize Mozilla products.
  • Payment Subscription – a service that is used as the interface in front of the payment provider (Stripe).
  • Firefox Private Network – a site from which you can download a desktop extension that helps secure and protect your connection everywhere you use Firefox.
  • Ship It – a system that accepts requests for releases from humans and translates them into information and requests that our Buildbot-based release automation can process.
  • Speak To Me – Mozilla’s Speech Recognition API.

The new payouts have already been applied to the most recently reported web bugs.

We hope the new sites and increased payments will encourage you to have another look at our sites and help us keep them safe for everyone who uses the web.

Happy Birthday, Firefox. And happy bug hunting to you all!