The end of SHA-1 on the Public Web

Our deprecation plan for the SHA-1 algorithm in the public Web, first announced in 2015, is drawing to a close. Today a team of researchers from CWI Amsterdam and Google revealed the first practical collision for SHA-1, affirming the insecurity of the algorithm and reinforcing our judgment that it must be retired from security use on the Web.

As announced last fall, we’ve been disabling SHA-1 for increasing numbers of Firefox users since the release of Firefox 51 using a gradual phase-in technique. Tomorrow, this deprecation policy will reach all Firefox users. It is enabled by default in Firefox 52.

Phasing out SHA-1 in Firefox will affect people accessing websites that have not yet migrated to SHA-2 certificates, well under 0.1% of Web traffic. In parallel to phasing out insecure cryptography from Firefox, we will continue our outreach efforts to help website operators use modern and secure HTTPS.
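Website operators who are unsure whether their certificate still uses SHA-1 can check its signature algorithm with standard tools. For example, a quick OpenSSL check (example.com is a placeholder for your own hostname; this is just a sketch):

# Print the signature algorithm of the certificate a site currently serves;
# a SHA-1 certificate shows e.g. "sha1WithRSAEncryption", while a SHA-2
# certificate shows e.g. "sha256WithRSAEncryption"
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -text \
  | grep 'Signature Algorithm'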

Users should always make sure to update to the latest version of Firefox for the most-recent security updates and features by going to https://www.mozilla.org/firefox.

Questions about Mozilla policies related to SHA-1 based certificates should be directed to the mozilla.dev.security.policy forum.

Mozilla Security Bytes, ep. 1: Content Security Policy

Sharing the work we do around web and information security is an important role of Mozilla Security. We often get questions on specific security technologies, both from our engineers who work on Mozilla products and services, and from the community interested in using these technologies in their own environments.

Today, we introduce a new podcast, called Mozilla Security Bytes, where we discuss these security technologies in detail.

In this first episode, we talk about Content Security Policy, or CSP, with Christoph Kerschbaumer, Frederik Braun and Dylan Hardison. We cover both the history and future of CSP, and various issues we have learned while implementing it on our own sites and services.

We hope you find the discussion interesting, and feel free to share your feedback with us at security-bytes@mozilla.com.

Download link, Podcast subscription URL

 Links mentioned in the episode

Setting a Baseline for Web Security Controls

Securing modern web applications effectively is a complex process. However, there are many straightforward security controls, such as HTTP security headers, which are very effective at blocking common web attacks.
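As a concrete illustration (a generic sketch; www.example.com is a placeholder), a few of the most commonly recommended response headers can be checked from the command line:

# Check a site for some widely recommended security headers
curl -sI https://www.example.com/ \
  | grep -iE '^(strict-transport-security|content-security-policy|x-content-type-options|x-frame-options):'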

At Mozilla we provide Security Guidelines as well as a Checklist of security controls for the developers of Firefox Services. Last year, we introduced the Mozilla Observatory, a hosted scanner to evaluate the security of websites and services. In this blog post, we present the ZAP Baseline scan designed to test the security controls of web applications in Continuous Integration and Continuous Deployment (CI/CD).

Verifying that the correct controls are in place across all of our applications can be challenging, especially in a CI/CD environment. We run full vulnerability scans against our services on a regular basis, but these can take a long time to run and are not really adapted to fast release cycles.

This is why we have introduced a ‘baseline’ scan which runs very quickly and on every release of a service, but still gives us crucial feedback about the key security controls that we are concerned about. The baseline scans are included in the CI/CD pipelines of Firefox services to inform developers of potential issues before they reach production environments. We also run it against all of those services every day to generate a dashboard of the overall security status of our services.

This blog post presents the techniques we use to implement baseline scans in our infrastructure.

The ZAP Baseline Scan

Mozilla invests heavily in the development and support of security tools. The author of this blog post leads the OWASP Zed Attack Proxy (ZAP) project,  which we use to run baseline and vulnerability scans.

The ZAP baseline scan is a quick, easy and highly configurable way to test the security controls you care about. The tests are non-intrusive, so they are safe to run against production applications.
You don’t need to have ZAP installed – the zap-baseline.py script uses Docker and is included in both ZAP Docker images, owasp/zap2docker-stable and owasp/zap2docker-weekly.

For our baseline scans we use the weekly docker image which has more options available – you can run the script with the -h flag to see all of them.
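For example, to fetch the weekly image and list the available options (a straightforward sketch):

# Pull the weekly ZAP Docker image
docker pull owasp/zap2docker-weekly

# Print all options supported by the baseline scan script
docker run owasp/zap2docker-weekly zap-baseline.py -h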

The zap-baseline.py script uses the ZAP spider to explore the application, by default for just one minute. Spidering the application is important to verify that all pages, not just the top one, implement the required security controls. This is particularly useful when web frameworks handle some pages automatically without setting the headers.
The script will then report all of the potential issues found.

The baseline scan can be run against an application by just specifying its URL using the -t flag:

docker run owasp/zap2docker-weekly zap-baseline.py -t https://www.example.com

This will produce output like: 

Total of 3 URLs
PASS: Cookie No HttpOnly Flag [10010]
PASS: Cookie Without Secure Flag [10011]
PASS: Password Autocomplete in Browser [10012]
PASS: Cross-Domain JavaScript Source File Inclusion [10017]
PASS: Content-Type Header Missing [10019]
PASS: Information Disclosure - Debug Error Messages [10023]
PASS: Information Disclosure - Sensitive Informations in URL [10024]
PASS: Information Disclosure - Sensitive Information in HTTP Referrer Header [10025]
PASS: HTTP Parameter Override [10026]
PASS: Information Disclosure - Suspicious Comments [10027]
PASS: Viewstate Scanner [10032]
PASS: Secure Pages Include Mixed Content [10040]
PASS: Weak Authentication Method [10105]
PASS: Absence of Anti-CSRF Tokens [10202]
PASS: Private IP Disclosure [2]
PASS: Session ID in URL Rewrite [3]
PASS: Script Passive Scan Rules [50001]
PASS: Insecure JSF ViewState [90001]
PASS: Charset Mismatch [90011]
PASS: Application Error Disclosure [90022]
PASS: WSDL File Passive Scanner [90030]
PASS: Loosely Scoped Cookie [90033]
WARN: Incomplete or No Cache-control and Pragma HTTP Header Set [10015] x 1
    https://www.example.com
WARN: Web Browser XSS Protection Not Enabled [10016] x 3
    https://www.example.com
    https://www.example.com/robots.txt
    https://www.example.com/sitemap.xml
WARN: X-Frame-Options Header Not Set [10020] x 1
    https://www.example.com
WARN: X-Content-Type-Options Header Missing [10021] x 1
    https://www.example.com
FAIL-NEW: 0    FAIL-INPROG: 0    WARN-NEW: 4    WARN-INPROG: 0    INFO: 0    IGNORE: 0    PASS: 22

By default the output lists all of the passive scan rules applied and whether they passed or failed.

You can change how the baseline scan handles each rule by specifying a rule configuration file, via either the -c flag (for a local file) or the -u flag (for a remote URL).
You can also generate a default file using the -g option: https://github.com/zaproxy/community-scripts/blob/master/api/mass-baseline/mass-baseline-default.conf
As explained in the generated file header, you can change any of the “WARN”s to “IGNORE” or “FAIL”.
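As an illustration (www.example.com is a placeholder, and the /zap/wrk mount follows the convention described in the ZAP Docker documentation), you can generate a configuration file, adjust it, and feed it back into the scan:

# Mount the current directory as ZAP's working directory so the generated
# configuration file ends up on the host
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-weekly \
  zap-baseline.py -t https://www.example.com -g baseline.conf

# The generated file lists one rule per line, roughly of the form:
#   10020   WARN    (X-Frame-Options Header Not Set)
# Change WARN to FAIL for the rules that should break the build, then re-run:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-weekly \
  zap-baseline.py -t https://www.example.com -c baseline.conf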

The script will exit with a 0 if there are no issues, 1 if there are any failures or 2 if there are just warnings. The return value can therefore be used in CI tools like Jenkins, CircleCI, TravisCI, etc. to fail a build step.
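For instance, a plain shell build step (a sketch; the target URL is a placeholder) could treat failures as fatal while tolerating warnings:

#!/bin/sh
# Run the baseline scan and act on the documented exit codes:
# 0 = no issues, 1 = failures, 2 = warnings only
docker run -t owasp/zap2docker-weekly zap-baseline.py -t https://staging.example.com
rc=$?

if [ "$rc" -eq 1 ]; then
  echo "Baseline scan reported failures" >&2
  exit 1
elif [ "$rc" -eq 2 ]; then
  echo "Baseline scan reported warnings only; not failing the build" >&2
elif [ "$rc" -ne 0 ]; then
  echo "Baseline scan did not complete (exit code $rc)" >&2
  exit "$rc"
fi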

For example, the configuration below shows how the baseline scan can run in CircleCI with every pull request:

test:
  override:
    # build and run an application container
    - docker build -t myrepo/myapp .
    - docker run myrepo/myapp &
    # retrieve the ZAP container
    - docker pull owasp/zap2docker-weekly
    # run the baseline scan against the application
    - |
      docker run -t owasp/zap2docker-weekly zap-baseline.py \
      -t http://172.17.0.2:8080/ -m 3 -i

Scanning Multiple Sites

The baseline scan is a great way to check that a single site meets your base security requirements.

In order to run the ZAP Baseline scan against a large number of websites, we have written a set of wrapper scripts specific to Mozilla. You can find  generic versions of these scripts in the ZAP community-scripts repository.

You will need to customize these scripts as detailed in the README:

  • Change the sites listed in mass-baseline.sh
  • Change the relevant user and repo details in mass-baseline.sh
  • Build a docker image
  • Run the docker image, setting the credentials for your user (a generic sketch of the last two steps follows below)
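As a rough, hypothetical sketch of the last two steps (the image name is a placeholder, and the exact run arguments, including how credentials are supplied, are described in the scripts’ README):

# Build the image from your customized checkout of the community-scripts
# mass-baseline directory (the image name is a placeholder)
docker build -t myorg/mass-baseline .

# Run the scans; the wrapper scripts pick up the user, repo and credential
# details you configured, as described in the README
docker run myorg/mass-baseline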

These scripts will then generate a summary dashboard in your repo wiki:

Baseline-summary
The ‘Status’ badge is a link to a page containing the latest baseline results for the relevant application, and the ‘History’ date links to a page which shows all of the previous scans.
Example pages are included on the community scripts wiki: https://github.com/zaproxy/community-scripts/wiki/Baseline-Summary

Tuning

The baseline scan is highly configurable and allows you to fine tune the scanning to handle your applications more effectively.
You can do things like:

  • Increase the time spent spidering your application
  • Use the Ajax Spider in addition to the standard ZAP spider to handle applications that make heavy use of JavaScript
  • Include alpha passive scan rules as well as the beta and release quality ones used by default
  • Ignore specific URLs or even ignore specific issues on those pages
  • Link known issues to a bugtracker URL
  • Specify any of the options supported on the ZAP command line

For more details see the ZAP wiki: https://github.com/zaproxy/zaproxy/wiki/ZAP-Baseline-Scan
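Purely as an illustration of this kind of tuning (flag names as documented on the wiki above; the config URL is a hypothetical placeholder), a more thorough scan of a JavaScript-heavy application might look like:

# Spider for 10 minutes, also run the Ajax spider, include alpha passive
# scan rules, and pull a shared rule configuration file from a remote URL
docker run -t owasp/zap2docker-weekly zap-baseline.py \
  -t https://www.example.com \
  -m 10 -j -a \
  -u https://raw.githubusercontent.com/example-org/security/master/baseline.conf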

Conclusion

The baseline scan gives us immediate feedback about the security controls in place across all of our web applications. The scans run on every commit so that we are immediately aware if there has been any regression. The dashboard allows us to track the state of our applications and the CI integration provides the ability to block a deployment if the baseline is not met.

Integrating baseline scanning in CI/CD helps us work more closely with developers and operators. We don’t force our security tools onto DevOps processes; we integrate security into DevOps. The net effect is better collaboration between teams, and faster turnaround on fixing security issues.

Communicating the Dangers of Non-Secure HTTP

Password Field with Warning Drop Down

HTTPS, the secure variant of the HTTP protocol, has long been a staple of the modern Web. It creates secure connections by providing authentication and encryption between a browser and the associated web server. HTTPS helps keep you safe from eavesdropping and tampering when doing everything from online banking to communicating with your friends. This is important because over a regular HTTP connection, someone else on the network can read or modify the website before you see it, putting you at risk.

To keep users safe online, we would like to see all developers use HTTPS for their websites. Using HTTPS is now easier than ever. Amazing progress in HTTPS adoption has been made, with a substantial portion of web traffic now secured by HTTPS.

Changes to Firefox security user experience
Up until now, Firefox has used a green lock icon in the address bar to indicate when a website is using HTTPS, and a neutral indicator (no lock icon) when a website is not using HTTPS. The green lock icon signals that the connection to the site is secure.

Address bar showing green lock at https://example.com

Current secure (HTTPS) connection

Address bar at example.com over HTTP

Current non-secure (HTTP) connection

In order to clearly highlight risk to the user, starting this month in Firefox 51, web pages that collect passwords but don’t use HTTPS will display a grey lock icon with a red strike-through in the address bar.

Control Center message when visiting an HTTP page with a Password field

Clicking on the “i” icon will show the text “Connection is Not Secure” and “Logins entered on this page could be compromised”.

This has been the user experience in Firefox Dev Edition since January 2016. Since then, the percentage of login forms detected by Firefox that are fully secured with HTTPS has increased from nearly 40% to nearly 70%, and the overall number of HTTPS pages has also increased by 10%.

In upcoming releases, Firefox will show an in-context message when a user clicks into a username or password field on a page that doesn’t use HTTPS.  That message will show the same grey lock icon with red strike-through, accompanied by a similar message, “This connection is not secure. Logins entered here could be compromised.”:

Login form with Username and Password field; Password field shows warning

In-context warning for a password field on a page that doesn’t use HTTPS

What to expect in the future
To continue to promote the use of HTTPS and properly convey the risks to users, Firefox will eventually display the struck-through lock icon for all pages that don’t use HTTPS, to make clear that they are not secure. As our plans evolve, we will continue to post updates, and we hope these changes encourage all developers to take the necessary steps to protect users of the Web through HTTPS.

For more technical details about this feature, please see our blog post from last year. In order to test your website before some of these changes are in the release version of Firefox, please install the latest version of Firefox Nightly.

Thanks!
Thank you to the engineering, user experience, user research, quality assurance, and product teams that helped make this happen – Sean Lee, Tim Guan-tin Chien, Paolo Amadini, Johann Hofmann, Jonathan Kingston, Dale Harvey, Ryan Feeley, Philipp Sackl, Tyler Downer, Adrian Florinescu, and Richard Barnes. And a very special thank you to Matthew Noorenberghe, without whom this would not have been possible.

Fixing an SVG Animation Vulnerability

At roughly 1:30pm Pacific time on November 30th, Mozilla released an update to Firefox containing a fix for a vulnerability reported as being actively used to deanonymize Tor Browser users.  Existing copies of Firefox should update automatically over the next 24 hours; users may also download the updated version manually.

Early on Tuesday, November 29th, Mozilla was provided with code for an exploit using a previously unknown vulnerability in Firefox.  The exploit was later posted to a public Tor Project mailing list by another individual.  The exploit took advantage of a bug in Firefox to allow the attacker to execute arbitrary code on the targeted system by having the victim load a web page containing malicious JavaScript and SVG code.  It used this capability to collect the IP and MAC address of the targeted system and report them back to a central server.  While the payload of the exploit would only work on Windows, the vulnerability exists on Mac OS and Linux as well.  Further details about the vulnerability and our fix will be released according to our disclosure policy.

The exploit in this case works in essentially the same way as the “network investigative technique” used by the FBI to deanonymize Tor users (as the FBI described it in an affidavit).  This similarity has led to speculation that this exploit was created by the FBI or another law enforcement agency.  As of now, we do not know whether this is the case.  If this exploit was in fact developed and deployed by a government agency, the fact that it has been published and can now be used by anyone to attack Firefox users is a clear demonstration of how supposedly limited government hacking can become a threat to the broader Web.

Enforcing Content Security By Default within Firefox

Before loading a URI, Firefox enforces numerous content security checks to verify that web content cannot perform malicious actions. As a first line of defense, for example, Firefox enforces the Same-Origin Policy (SOP), which prevents malicious script on one origin from obtaining access to sensitive content on another origin. Firefox’s content security checks also ensure that web content rendered on a user’s desktop or mobile device cannot access local files. Additionally, Firefox supports and enforces Cross-Origin Resource Sharing (CORS), Mixed Content Blocking, Content Security Policy (CSP), and Subresource Integrity (SRI), and applies many more mitigation strategies to prevent web content from performing malicious actions. We refer the reader to Mozilla’s HTTP Observatory for more information on the latest content security concepts and how to configure a site safely and securely. It is worth mentioning that, over time, not only does the web evolve but so do the content security mechanisms necessary to ensure an end user’s security and privacy; hence it is vital to provide an API that new security features within a browser can rely on.

 

Enforcing Content Security Historically

Firefox’s layout engine is called Gecko; it reads web content such as HTML, CSS, and JavaScript and renders it on the user’s screen. For loading resources over the internet, Firefox relies on a network library called Necko. Necko is a platform-independent API and provides functionality for several layers of networking, ranging from the transport to the presentation layer. For historical reasons, Necko was developed to be usable as a standalone client. That separation also caused security checks to happen in Gecko rather than Necko, and caused Necko to be agnostic about load context.
opt_in_gecko
As illustrated, Gecko performs all content security checks before resources are requested over the network through Necko. The downside of this legacy architecture is that all the different subsystems in Gecko need to perform their own security checks before resources are requested over the network. For example, the ImageLoader as well as the ScriptLoader have to opt into the relevant security checks before initiating a GET request for the image or script to be loaded. Even though systematic security checks were always performed, those checks were sprinkled throughout the codebase. Over time, various specifications for dynamically loading content have proven that such a scattered security model is error-prone.

 

Enforcing Content Security By Default

Instead of opting into security checks wherever resource loads are initiated throughout the codebase, we refactored Firefox so content security checks are enforced by default.
opt_out_gecko
As illustrated, we revamped the security landscape of Firefox, providing an API that centralizes all the security checks within Necko. Instead of performing ad hoc security checks for each network request within Gecko, our implementation enables Gecko to provide information about the load context so that Necko can perform the relevant security checks in a centralized manner. Whenever data (script, CSS, image, …) is about to be requested from the network, our technique creates an immutable loadinfo object and attaches it to the network request; that object remains assigned to the load throughout the whole loading process and allows Firefox to provide the same security guarantees even for resource loads that encounter a server-side redirect.
For an in-depth description of our implementation, which will be fully enforced within Firefox (v. 53), we refer the reader to our publication Enforcing Content Security By Default within Web Browsers (DOI 10.1109/SecDev.2016.8) which was presented at the IEEE International Conference on Cybersecurity Development on November 4th, 2016.

Distrusting New WoSign and StartCom Certificates

Mozilla has discovered that a Certificate Authority (CA) called WoSign has had a number of technical and management failures. Most seriously, we discovered they were backdating SSL certificates in order to get around the deadline that CAs stop issuing SHA-1 SSL certificates by January 1, 2016. Additionally, Mozilla discovered that WoSign had acquired full ownership of another CA called StartCom and failed to disclose this, as required by Mozilla policy. The representatives of WoSign and StartCom denied and continued to deny both of these allegations until sufficient data was collected to demonstrate that both allegations were correct. The levels of deception demonstrated by representatives of the combined company have led to Mozilla’s decision to distrust future certificates chaining up to the currently-included WoSign and StartCom root certificates.

Specifically, Mozilla is taking the following actions:

  1. Distrust certificates with a notBefore date after October 21, 2016 which chain up to the following affected roots. If additional back-dating is discovered (by any means) to circumvent this control, then Mozilla will immediately and permanently revoke trust in the affected roots.
    • This change will go into the Firefox 51 release train.
    • The code will use the following Subject Distinguished Names to identify the root certificates, so that the control will also apply to cross-certificates of these roots.
      • CN=CA 沃通根证书, OU=null, O=WoSign CA Limited, C=CN
      • CN=Certification Authority of WoSign, OU=null, O=WoSign CA Limited, C=CN
      • CN=Certification Authority of WoSign G2, OU=null, O=WoSign CA Limited, C=CN
      • CN=CA WoSign ECC Root, OU=null, O=WoSign CA Limited, C=CN
      • CN=StartCom Certification Authority, OU=Secure Digital Certificate Signing, O=StartCom Ltd., C=IL
      • CN=StartCom Certification Authority G2, OU=null, O=StartCom Ltd., C=IL
  2. Add the previously identified backdated SHA-1 certificates chaining up to these affected roots to OneCRL.
  3. No longer accept audits carried out by Ernst & Young Hong Kong.
  4. Remove these affected root certificates from Mozilla’s root store at some point in the future. If the CA’s new root certificates are accepted for inclusion, then Mozilla may coordinate the removal date with the CA’s plans to migrate their customers to the new root certificates. Otherwise, Mozilla may choose to remove them at any point after March 2017.
  5. Mozilla reserves the right to take further or alternative action.

If you receive a certificate from one of these two CAs after October 21, 2016, your certificate will not validate in Mozilla products such as Firefox 51 and later, until these CAs provide new root certificates with different Subject Distinguished Names, and you manually import the root certificate that your certificate chains up to. Consumers of your website will also have to manually import the new root certificate until it is included by default in Mozilla’s root store.
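Site operators who are unsure whether an existing certificate is affected can inspect its issuance date and issuer with standard tools; for example (cert.pem and example.com are placeholders):

# Inspect a local certificate file: when it was issued (notBefore) and by whom
openssl x509 -in cert.pem -noout -startdate -issuer

# Or inspect the certificate a live site is currently serving
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -startdate -issuer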

Each of these CAs may re-apply for inclusion of new (replacement) root certificates as described in Bug #1311824 for WoSign, and Bug #1311832 for StartCom.

We believe that this response is consistent with Mozilla policy and is one which we could apply to any other CA that demonstrated similar levels of deception to circumvent Mozilla’s CA Certificate Policy, the CA/Browser Forum’s Baseline Requirements, and direct inquiries from Mozilla representatives.

Mozilla Security Team

Phasing Out SHA-1 on the Public Web

An algorithm we’ve depended on for most of the life of the Internet — SHA-1 — is aging, due to both mathematical and technological advances. Digital signatures incorporating the SHA-1 algorithm may soon be forgeable by sufficiently-motivated and resourceful entities.

Via our and others’ work in the CA/Browser Forum, following our deprecation plan announced last year, and in line with recommendations from NIST, issuance of SHA-1 certificates for the web mostly halted last January, with new certificates moving to more secure algorithms. Since May 2016, the use of SHA-1 on the web has fallen from 3.5% to 0.8%, as measured by Firefox Telemetry.

In early 2017, Firefox will show an overridable “Untrusted Connection” error whenever a SHA-1 certificate is encountered that chains up to a root certificate included in Mozilla’s CA Certificate Program. SHA-1 certificates that chain up to a manually-imported root certificate, as specified by the user, will continue to be supported by default; this will continue allowing certain enterprise root use cases, though we strongly encourage everyone to migrate away from SHA-1 as quickly as possible.

This policy has been included as an option in Firefox 51, and we plan to gradually ramp up its usage.  Firefox 51 is currently in Developer Edition, and is currently scheduled for release in January 2017. We intend to enable this deprecation of SHA-1 SSL certificates for a subset of Beta users during the beta phase for 51 (beginning November 7) to evaluate the impact of the policy on real-world usage. As we gain confidence, we’ll increase the number of participating Beta users. Once Firefox 51 is released in January, we plan to proceed the same way, starting with a subset of users and eventually disabling support for SHA-1 certificates from publicly-trusted certificate authorities in early 2017.

Questions about SHA-1 based certificates should be directed to the mozilla.dev.security.policy forum.

Update on add-on pinning vulnerability

Earlier this week, security researchers published reports that Firefox and Tor Browser were vulnerable to “man-in-the-middle” (MITM) attacks under special circumstances. Firefox automatically updates installed add-ons over an HTTPS connection. As a backup protection measure against mis-issued certificates, we also “pin” Mozilla’s web site certificates, so that even if an attacker manages to get an unauthorized certificate for our update site, they will not be able to tamper with add-on updates.

Due to flaws in the process we used to update “Preloaded Public Key Pinning” in our releases, the pinning for add-on updates became ineffective for Firefox release 48 starting September 10, 2016 and ESR 45.3.0 on September 3, 2016. As of those dates, an attacker who was able to get a mis-issued certificate for a Mozilla Web site could cause any user on a network they controlled to receive malicious updates for add-ons they had installed.

Users who have not installed any add-ons are not affected. However, Tor Browser contains add-ons and therefore all Tor Browser users are potentially vulnerable. We are not presently aware of any evidence that such malicious certificates exist in the wild and obtaining one would require hacking or compelling a Certificate Authority. However, this might still be a concern for Tor users who are trying to stay safe from state-sponsored attacks. The Tor Project released a security update to their browser early on Friday; Mozilla is releasing a fix for Firefox on Tuesday, September 20.

To help users who have not updated Firefox recently, we have also enabled Public Key Pinning Extension for HTTP (HPKP) on the add-on update servers. Firefox will refresh its pins during its daily add-on update check and users will be protected from attack after that point.
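For background, HPKP is delivered as an HTTP response header that tells the browser which public key hashes to expect for a host on future connections. A generic, hypothetical example of what such a header looks like (the hostname and pin values below are placeholders, not Mozilla’s actual pins):

# Inspect a server's HPKP header
curl -sI https://addons.example.com/ | grep -i '^public-key-pins'

# A typical Public-Key-Pins header (placeholder pin values):
# Public-Key-Pins: pin-sha256="base64+of+primary+spki+hash=";
#                  pin-sha256="base64+of+backup+spki+hash=";
#                  max-age=5184000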

Firefox AddressSanitizer builds have been moved

This is a short announcement for all security researchers working on Firefox who use our pre-built AddressSanitizer (ASan) builds. Until recently, you could download these ASan builds from our FTP servers. Due to changes to our internal build infrastructure, these builds are no longer available from the usual location. Instead, they are available from a build system called TaskCluster. Most people just need the latest available build for testing purposes. Fortunately, this is easy to get:

Direct Download for Latest Firefox AddressSanitizer Build

For more advanced queries, TaskCluster offers a public API that can be used to interact with the system (e.g. to retrieve past builds). More information is available in the documentation.
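As a rough illustration of how that API can be used (the index namespace below is an assumption based on the gecko.v2 naming scheme in use at the time; consult the TaskCluster documentation for the exact route and namespace):

# Look up the most recently indexed task for a namespace via the TaskCluster
# index API (namespace is an assumption; see the TaskCluster documentation)
curl -s https://index.taskcluster.net/v1/task/gecko.v2.mozilla-central.latest.firefox.linux64-asan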