Improving Malware Detection in Firefox

Sid Stamm

We are always looking for ways to better protect people from the constant threat of malicious software. For years Firefox has used Google’s Safe Browsing phishing and malware protection to help keep you from accidentally visiting dangerous sites. This feature works by checking the sites you visit against locally downloaded lists of reported phishing and malware sites. (For more details, check out this page.)

Firefox is about to get safer.

Until recently, we only had access to lists of reported malicious web sites; now the Safe Browsing service monitors malicious downloaded files too. The latest version of Firefox (as of July 22) will protect you from more malware by comparing the files you download against these lists of malicious files and blocking them before they can infect your system.

The next version of Firefox (released in September) will prevent even more malicious downloads on Windows. When you download an application file, Firefox will verify its signature. If it is signed, Firefox compares the signature with a list of known safe publishers. For files that are not identified by the lists as “safe” (allowed) or as “malware” (blocked), Firefox asks Google’s Safe Browsing service whether the software is safe by sending it some of the download’s metadata. Note that this online check is performed only in Firefox on Windows, and only for downloaded files that don’t have a known good publisher. Most common, safe software for Windows is signed, so this final check won’t usually be needed.
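The decision order described above can be sketched roughly as follows. This is an illustrative sketch only, not Firefox’s actual code: the function names and list formats are hypothetical.

```python
# Illustrative sketch of the download-check order described above.
# The names and data shapes here are hypothetical, not Firefox internals.

def check_download(file_meta, allow_list, block_list, known_safe_publishers):
    """Return 'allow', 'block', or 'ask-remote' for a downloaded file."""
    if file_meta["digest"] in block_list:        # known malware: block locally
        return "block"
    if file_meta["digest"] in allow_list:        # known good: allow locally
        return "allow"
    signer = file_meta.get("signer")
    if signer in known_safe_publishers:          # signed by a trusted publisher
        return "allow"
    # Only now consult the remote Safe Browsing service with some metadata
    return "ask-remote"
```

The key property is that the remote lookup is the last resort: most downloads are resolved against local lists or the publisher allowlist, so no metadata leaves the machine for them.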

In our preliminary testing, we estimate this new malware protection cuts the amount of malware that slips through Firefox’s protections in half. That’s a lot of malware that will be stopped in its tracks.

And of course if you don’t want to send Google data about the few downloads that don’t match these lists, you can turn off malware protection. But we believe eradicating malware is critical for most people, and expect this new feature to help work behind the scenes to keep you safe as you browse.

For more details, head on over to Monica’s blog post.

June is Internet Safety Month!

Sid Stamm

Happy Internet Safety Month, everyone!

In today’s world it is more critical than ever to be aware of security risks online. High-profile and broad attacks made news quite a bit in the last year. From the Heartbleed vulnerability to spikes in credit card theft and fraud, buzz about online privacy and security is on the rise. Even the White House has turned attention to cybersecurity.

The Ponemon Institute estimates that 47% of Americans have had their personal information compromised! So now is a great time to do some routine maintenance and beef up your safety:

  1. Download a secure and private browser: A Web browser like Mozilla Firefox supports phishing and malware detection, protects against spyware, and warns you about potentially fraudulent sites. Additionally, Firefox and most other Web browsers offer a “Do Not Track” feature that lets you opt out of having your information tracked by websites like advertising networks. For a sensitive browsing session, you can temporarily enable private browsing, which helps keep traces of that session hidden.
  2. Keep your software up-to-date: Ensure any software you download to your desktop or your mobile device – including apps and add-ons – stays updated.
  3. Secure your passwords: Create a unique password for each of your accounts by using a variety of upper and lowercase letters, numbers and punctuation marks. The new Firefox Sync makes it even easier to access your bookmarks, history, Awesome Bar intelligence, passwords, form-fill data and open tabs from Firefox running on other computers and mobile phones.
  4. Look for the S: Never purchase anything from a site that doesn’t have SSL/TLS encryption. You’ll know because the site’s address will start with https:// instead of http://. Additionally, never provide your credit card information via email.
  5. Don’t sacrifice security for mobility: Don’t just stop at your home computer. Extend your security to browsing from your mobile phone by making sure you’re always connected to a secure wireless network or your mobile provider’s 3G network before you enter any personal information or passwords or do any online shopping.

The Web is awesome. While we easily get lost in all the amazing stuff we can do online, this June the Anti-Phishing Working Group and National Cyber Security Alliance are encouraging people to Stop, Think, Connect to stay safe online.

Introducing Mozilla Winter of Security 2014

Curtis Koenig

At Mozilla, we have a loosely formed group called Security Automation, where people who build security tools can meet, exchange ideas, and show their work. We build projects around application and operations security. Some of the things we’ve worked on include ZAP, Zest, Plug’n’Hack, Minion, MIG, MozDef, ScanJS, and Cipherscan. And, as you would expect from Mozilla, our work is public for all to see, use, and contribute to.

In the past, students have asked to work on some of these projects. One trend we’ve seen is that many students are looking for real-world projects to sink their teeth into: something worth their attention, and something people will actually use.

In response, we created the Mozilla Winter of Security, or MWoS. MWoS is composed of 11 projects from the Security Automation effort that map directly to needs at Mozilla. They are designed to solve real-world problems in an innovative and open way. We also made them autonomous, so that students don’t need to learn the inner workings of Mozilla in order to work on these projects.

It is called Winter of Security because we want students to get involved roughly between September and April. Each project has an advisor from Mozilla who will dedicate a few hours every week to the students. We also ask that a professor oversee the team from a university point of view and ensure the project aligns with their curriculum.
Anyone can apply, on the condition that the university will give class credit and a grade for the work done in MWoS.

MWoS is a win for all. Students get a chance to work on real-world security projects under the guidance of an experienced security engineer. Professors get to bring cutting-edge security projects into their programs. Mozilla and the community get better security tools that we would not have the resources to build or improve ourselves.

If you are a professor, tell your students about the Mozilla Winter of Security today. If you are a student, start assembling your team and fill out the application form before July 15th, 2014. We limited this round to 11 projects, with one team per project, and will be selecting the best applications in August.

If you have questions, and want to discuss MWoS, you can reach us on IRC in the #security channel, or via the discussion page on the wiki. If you want details about a specific project, feel free to contact the project advisor directly on IRC.

MWoS is part of the wider Mozilla Security Mentorship program.

Red Panda photograph from Wikimedia Commons under the Creative Commons Attribution-Share Alike 3.0 Unported license.

Checking Compliance Status with Updated CA Certificate Policy

kwilson

In early 2013 Mozilla released version 2.1 of Mozilla’s CA Certificate Policy, which added a requirement that subordinate CA certificates be either technically constrained or audited, and required CAs who issue SSL certificates to comply with the CA/Browser Forum’s Baseline Requirements. Then, in July, we updated Mozilla’s CA Certificate Enforcement Policy to make it clear that Mozilla will not tolerate misuse of publicly trusted certificates. CAs were given a grace period of just over one year to comply with the changes introduced in version 2.1 of the policy. So, today we sent an email to all Certificate Authorities (CAs) in Mozilla’s CA program to check on their progress.

The communication includes the following 5 action items for CAs.

  1. Ensure that Mozilla’s spreadsheet of included root certificates has the correct link to your most recent audit statement, and that the date of the audit statement is correct.
  2. Send Mozilla the link to your most recent Baseline Requirements audit statement.
  3. Test Mozilla’s new Certificate Verification library with your CA hierarchies and inform your customers of the upcoming changes as needed.
  4. Check your certificate issuance to confirm that no new certificates will be issued with the problems listed here.
  5. Send Mozilla information about your publicly disclosed subordinate CA certificates that chain up to certificates in Mozilla’s CA program, as per Items #8, 9, and 10 of Mozilla’s CA Certificate Inclusion Policy.

The full CA Communication is available here, and responses will be tabulated here.

We closed the communication by re-iterating that participation in Mozilla’s CA Certificate Program is at our sole discretion, and we will take whatever steps are necessary to keep our users safe. Nevertheless, we believe that the best approach to safeguard that security is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to improve.

Mozilla Security Engineering Team

Hack in the Box HackWeekDay 2014

Paul Theriault

The Mozilla security team is proud to be once again sponsoring the Hack-in-the-Box HackWeekDay competition, this time at the Haxpo conference in Amsterdam, 28-30 May 2014. Come learn about Firefox OS, make apps to compete for great prizes and help shape the future of the mobile web.

This HackWeekDay event is the biggest yet and will run over the course of three separate days. There will be daily prizes, and you can compete on as many days as you want:

  • Day 1: Firefox OS Homescreen & WebRTC applications
  • Day 2: Facebook Social/Parse APIs applications
  • Day 3: Combined app hacking competition – build on your apps from previous days, or come up with a new app, and compete for the grand prize of most 1337 app.

You can attend just one day, or compete in all three. For details of the prizes on offer and how to get involved, see the HackWeekDay page on the HITB website.

Firefox OS Homescreen & WebRTC applications

For the first day (28th May), the competition will focus on creating a Firefox OS app that incorporates one of the following themes:

  • Replaceable Homescreen: prototype new homescreen ideas for Firefox OS, and implement them using the new replaceable homescreen feature
  • WebRTC: prototype an app which uses WebRTC, taking advantage of WebRTC to access camera, microphone and/or peer-to-peer networking.

A group of Mozillians will be there to help developers test their entries on Firefox OS devices and award prizes (including the new Firefox OS Flame developer phones!).

Register Now!

How can people get prepared?

Interested developers who want to get started should get familiar with how to develop apps for Firefox OS and learn about WebRTC.

If you want to develop on Firefox OS itself, see the Hacking on Gaia page on MDN.

What are we looking for in the entries?

  • Prototypes which effectively demonstrate a new or interesting feature
  • Innovative use of Web APIs
  • High quality execution (especially on the constraints of mobile)
  • Benefits for the security and privacy of Firefox OS users

What about others who can’t attend the conference?

  • Make apps for the Mozilla marketplace
  • Volunteer to help Mozilla security team on Firefox OS (or anything else) (come find us on irc.mozilla.org#security or email security@mozilla.org)

Will the community be there?

Yes! Mozilla Nederland is planning a presence at the event.

$10,000 Security Bug Bounty for Certificate Verification

Daniel Veditz

Firefox developer builds (“Nightly”) are now using a new certificate verification library we’ve been working on for some time, and this code is on track to be released as part of Firefox 31 in July. As we’ve all been painfully reminded recently (Heartbleed, #gotofail), correct code in TLS libraries is crucial in today’s Internet, and we want to make sure this code is rock solid before it ships to millions of Firefox users. To that end we’re excited to launch a special Security Bug Bounty program that will pay $10,000 for critical security flaws found and reported in this new code before the end of June.

To qualify for the special bounty the bug and reporter must first meet the guidelines of our normal security bug bounty program (first to file wins in case of duplicates, employees are not eligible, and so on). In addition, to qualify for the special bounty amount the vulnerability must:

  • be in, or caused by, code in security/pkix or security/certverifier as used in Firefox
  • be triggered through normal web browsing (for example “visit the attacker’s HTTPS site”)
  • be reported in enough detail, including testcases, certificates, or even a running proof of concept server, that we can reproduce the problem
  • be reported to us by 11:59pm June 30, 2014 (Pacific Daylight Time)

We are primarily interested in bugs that allow the construction of certificate chains that are accepted as valid when they should be rejected, and bugs in the new code that lead to exploitable memory corruption. Compatibility issues that cause Firefox to be unable to verify otherwise valid certificates will generally not be considered a security bug, but a bug that caused Firefox to accept forged signed OCSP responses would be.

Valid security bugs that don’t meet the specific parameters of this special program remain eligible for our usual $3000 Security Bug Bounty, of course.

To enter the program please file a security bug at https://bugzilla.mozilla.org/ and send the bug ID or link by mail to security@mozilla.org. If for some reason you cannot file a bug you can send all the details by email, but filing the bug yourself has a couple of advantages for you. First, you will automatically be involved in any discussions the developers have about your bug, and second, if there are multiple reports of the same vulnerability the earliest bug filed wins the bounty. If you wish to encrypt mail to us our key can be found at https://www.mozilla.org/security/#pgpkey.

Exciting Updates to Certificate Verification in Gecko

cviecco

Today we’re excited to announce a new certificate verification library for Mozilla Products – mozilla::pkix! While most users will not notice a difference, the new library is more robust and maintainable. The new code is more robust because certificate path building attempts all potential trust chains for a certificate before giving up (acknowledging the fact that the certificate space is a cyclic directed graph and not a forest). The new implementation is also more maintainable, with only 4,167 lines of C++ code compared to the previous 81,865 lines of code which had been auto-translated from Java to C. The new library benefits from C++ functionality such as memory cleanup tools (e.g., RAII).
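The exhaustive path-building behaviour can be sketched as a depth-first search over the issuer graph, with a visited set so that cross-signing cycles don’t cause infinite recursion. This is a simplified illustration of the idea only; the data structures are hypothetical and this is not the mozilla::pkix API.

```python
# Illustrative sketch of exhaustive certificate path building: try every
# candidate issuer chain before giving up, tracking visited certs so that
# cycles from cross-signing don't recurse forever. Certs are stand-in
# strings; this is not how mozilla::pkix (C++) represents them.

def find_trust_path(cert, issuers_of, trust_anchors, seen=None):
    """Return a list of certs from `cert` up to a trust anchor, or None."""
    seen = seen or set()
    if cert in trust_anchors:
        return [cert]
    if cert in seen:                      # cycle in the cross-signing graph
        return None
    seen = seen | {cert}
    for issuer in issuers_of.get(cert, []):   # try ALL candidate issuers
        path = find_trust_path(issuer, issuers_of, trust_anchors, seen)
        if path is not None:
            return [cert] + path
    return None
```

The point of trying every candidate issuer, rather than committing to the first match, is exactly the robustness gain described above: a certificate with one bad chain and one good chain still validates.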

To provide some more background: Gecko has historically used the certificate verification processing in NSS to ensure that the certificates presented during a TLS/SSL handshake are valid. NSS currently has two code paths for certificate verification: “classic,” used by Gecko for Domain Validated (DV) certificate verification, and libPKIX, used by Gecko for Extended Validation (EV) certificate verification. The NSS team has wanted to replace the “classic” verification with libPKIX for some time, because libPKIX handles cross-signed certificates better and properly handles the certificate policies required for Extended Validation (EV) certificates. However, libPKIX has proven very difficult to work with.

We also took the opportunity to enforce some requirements in Mozilla’s CA Certificate Policy and in the CA/Browser Forum’s Baseline Requirements (BRs). The changes are listed here. While we have performed extensive compatibility testing, it is possible that your website certificate will no longer validate with Firefox 31. This should not be a problem if you use a certificate issued by one of the CAs in Mozilla’s CA Program, because they should already be issuing certificates according to Mozilla’s CA Certificate Policy and the BRs. If you notice an issue due to any of these changes, please let us know.

We are looking for feedback with respect to compatibility and security. For compatibility, we ask all site operators and security testers to install Firefox 31 and use it to browse to your favorite sites. In addition, we ask for willing C++ programmers out there to review our code. This new mozilla::pkix library is located at security/pkix and security/certverifier. A more detailed description is here. If you find an issue, please help us make it better by filing a Bugzilla bug report.

We look forward to your feedback on this new certificate verification library.

Mozilla Security Engineering Team

Testing for Heartbleed vulnerability without exploiting the server

dchan

Heartbleed is a serious vulnerability in OpenSSL that was disclosed on Tuesday, April 8th, and impacted any sites or services using OpenSSL 1.0.1 through 1.0.1f and 1.0.2-beta1. Due to the nature of the bug, the only obvious way to test a server for it was an invasive attempt to retrieve memory, which could lead to the compromise of sensitive data and/or potentially crash the service.

I developed a new test case that neither accesses sensitive data nor impacts service performance, and am posting the details here to help organizations conduct safe testing for Heartbleed vulnerabilities. While there is a higher chance of a false positive, this test should be safe to use against critical services.

The test works by observing a specification implementation error in vulnerable versions of OpenSSL: they respond to larger-than-allowed HeartbeatMessages.

Details:
OpenSSL was patched by commit 731f431. This patch addressed two implementation issues with the Heartbeat extension:

  1. HeartbeatRequest message specifying an erroneous payload length
  2. Total HeartbeatMessage length exceeding 2^14 (16,384 bytes)

Newer versions of OpenSSL silently discard messages that fall into the above categories. It is possible to detect older versions of OpenSSL by constructing a HeartbeatMessage and not sending the padding bytes. This makes the following check evaluate true:

/* Read type and payload length first */
if (1 + 2 + 16 > s->s3->rrec.length)
  return 0; /* silently discard */

Vulnerable versions of OpenSSL will respond to the request; however, no server memory will be read, because the client actually sent payload_length bytes of payload.
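Constructing the probe message is straightforward. The sketch below builds only the bytes of a padding-less HeartbeatRequest wrapped in a TLS record (heartbeat content type 24, per the TLS Heartbeat extension); the TLS handshake needed to actually deliver it to a server is deliberately omitted.

```python
import struct

# Sketch of the probe only: a HeartbeatRequest whose payload_length matches
# the payload actually sent, with the required >= 16 bytes of padding
# deliberately omitted. Patched OpenSSL silently discards it (the
# "1 + 2 + 16 > rrec.length" check above); vulnerable versions respond.

def build_heartbeat_probe(payload=b"", tls_version=0x0302):
    # HeartbeatMessage: type (1 = heartbeat_request) | payload_length | payload
    msg = struct.pack(">BH", 1, len(payload)) + payload
    # TLS record: content type 24 (heartbeat) | protocol version | length
    return struct.pack(">BHH", 24, tls_version, len(msg)) + msg
```

Because payload_length is honest, a vulnerable server echoes back only bytes the client actually sent; no server memory is disclosed.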

False positives may occur when all the following conditions are met (but it is unlikely):

  1. The service uses a library other than OpenSSL
  2. The library supports the Heartbeat extension
  3. The service has Heartbeat enabled
  4. The library performs a fixed length padding check similar to OpenSSL

False negatives may occur when all the following conditions are met, and can be minimized by repeating the test:

  1. The service uses a vulnerable version of OpenSSL
  2. The Heartbeat request isn’t received by the testing client

I have modified the Metasploit openssl_heartbleed module to support the ‘check’ option.

You can download the updated module at
https://github.com/dchan/metasploit-framework/blob/master/modules/auxiliary/scanner/ssl/openssl_heartbleed.rb

We hope you can use this to test your servers and make sure any vulnerable ones get fixed!

David Chan
Mozilla Security Engineer

Heartbleed Security Advisory

Sid Stamm

Issue

OpenSSL is a widely-used cryptographic library which implements the TLS protocol and protects communications on the Internet. On April 7, 2014, a bug in OpenSSL known as “Heartbleed” was disclosed (CVE-2014-0160). This bug allows attackers to read portions of the affected server’s memory, potentially revealing data that the server did not intend to reveal.

Impact

Two Mozilla systems were affected by Heartbleed. Most Persona and Firefox Account (FxA) servers run in Amazon Web Services (AWS), and their encrypted TLS connections are terminated on AWS Elastic Load Balancers (ELBs) using OpenSSL. Until April 8, when Amazon resolved the bug in AWS, those ELBs used a version of OpenSSL vulnerable to the Heartbleed attack.

Because these TLS connections terminated on Amazon ELBs instead of the backend servers, the data that could have been exposed to potential attackers was limited to data on the ELBs: TLS private keys and the plaintext contents of encrypted messages in transit.

For the Persona service, this included the bearer tokens used to authenticate sessions to Persona infrastructure run by Mozilla (including the “fallback” Persona IdP service). Knowledge of these tokens could have allowed forgery of signed Persona certificates.

For the Firefox Account service, this included email addresses, derivatives of user passwords, session tokens, and key material (see the FxA protocol for details).

Raw passwords are never sent to the FxA account server. Neither the account server nor a potential attacker could have learned the password or the encryption key that protects Sync data.

Sensitive FxA authentication information is only transmitted during the initial login process. On subsequent messages, the session token is used as an HMAC key (in the HAWK protocol), and not delivered over the connection. This reduces the amount of secret material visible in ELB memory.
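The design can be illustrated with a short sketch. This is not the actual HAWK implementation, and the message format shown is made up for illustration; the point is only that the token acts as an HMAC key, so the signature travels over the wire while the token itself does not.

```python
import hashlib
import hmac

# Illustrative sketch (not the real HAWK protocol): the session token keys
# an HMAC over each request, so an attacker reading ELB memory sees only
# the per-request MAC, never the long-lived token itself.

def sign_request(session_token: bytes, method: str, path: str) -> str:
    message = f"{method} {path}".encode()
    return hmac.new(session_token, message, hashlib.sha256).hexdigest()
```

A server holding the same token recomputes the MAC and compares; capturing one signed request does not reveal the key needed to sign others.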

Status

We have no evidence that any of our servers or user data has been compromised, but the Heartbleed attack is very subtle and leaves no evidence by design. At this time, we do not know whether these attacks have been used against our infrastructure or not. We are taking this vulnerability very seriously and are working quickly to validate the extent of its impact.

Amazon has updated their ELB instances to fix the vulnerability. We have re-generated TLS keys for all production services, and revoked the possibly exposed keys and certificates. Subsequent sessions with Persona and Firefox Accounts are not vulnerable to the Heartbleed attack.

As a precaution, we have revoked all Persona bearer tokens, effectively signing all users out of Persona. The next time you use Persona you may need to re-enter your password.

Because Firefox Accounts session tokens are not used as bearer tokens, we believe it was unnecessary to revoke them.

Additional User Precautions

Although we have no evidence that any data was compromised, concerned users can take the following additional precautions:

  • Persona: if you have a fallback account, you can change the password. This will require you to re-enter your password, on each browser, the next time you use Persona.
  • Firefox Accounts (FxA): you can change your account password. This will invalidate existing sessions, requiring you to sign back into Sync on all your devices. Devices will not sync until you sign back in.
  • If you have used the same password on multiple sites or services, in order to protect yourself, you should change the password on all services.

Using FuzzDB for Testing Website Security

amuntner

After posting an introduction to FuzzDB, I received the suggestion to write more detailed walkthroughs of the data files and how they could be used during black-box web application penetration testing. This article highlights some of my favorite FuzzDB files and discusses ways I’ve used them in the past.

If there are particular parts or usages of FuzzDB you’d like to see explored in a future blog post, let me know.

Exploiting Local File Inclusion

Scenario: While testing a website you identify a Local File Inclusion (LFI) vulnerability. Considering the various ways of exploiting LFI bugs, there are several pieces of required information that FuzzDB can help us to identify. (There is a nice cheatsheet here:  http://websec.wordpress.com/2010/02/22/exploiting-php-file-inclusion-overview/)

The first is directory traversal: how far do you need to traverse, and how do the characters have to be encoded to bypass possible defensive relative-path-traversal blacklists (a common but poor security mechanism employed by many applications)?
FuzzDB contains an eight-directory-deep set of directory traversal attack patterns using various exotic URL encoding mechanisms: https://code.google.com/p/fuzzdb/source/browse/trunk/attack-payloads/path-traversal/traversals-8-deep-exotic-encoding.txt

For example:

/%c0%ae%c0%ae\{FILE}

/%c0%ae%c0%ae\%c0%ae%c0%ae\{FILE}

/%c0%ae%c0%ae\%c0%ae%c0%ae\%c0%ae%c0%ae/{FILE}

In your fuzzer, you’d replace {FILE} with a known file location appropriate to the type of system you’re testing, such as the string “etc/passwd” (for a UNIX target), then review the returned responses to find ones indicating success, i.e., that the targeted file has been successfully retrieved. In terms of workflow, try sorting the responses by number of bytes returned; the successful responses will usually become immediately apparent.
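Generating payloads in the spirit of the FuzzDB file is simple: repeat an encoded “../” primitive up to eight times and append the target file. The sketch below uses a small illustrative sample of encodings, not the full FuzzDB set.

```python
# Sketch: build traversal payloads by repeating an encoded "../" primitive
# 1..8 times and substituting the target file. The encodings listed are a
# small illustrative sample of what the FuzzDB file contains.

ENCODED_DOTDOT = [
    "../",            # plain
    "..%2f",          # URL-encoded slash
    "%2e%2e/",        # URL-encoded dots
    "%c0%ae%c0%ae/",  # overlong UTF-8 encoding of "."
]

def traversal_payloads(target="etc/passwd", max_depth=8):
    for enc in ENCODED_DOTDOT:
        for depth in range(1, max_depth + 1):
            yield enc * depth + target

payloads = list(traversal_payloads())
```

Feed the resulting list into your fuzzer in place of the {FILE} templates above.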

The cheatsheet discusses a method of injecting PHP code, but to do this you need to be able to write to the server’s disk. Two places the HTTPD daemon typically has write permission are the access and error logs. FuzzDB contains a file of common locations for HTTP server log files, culled from popular distribution packages. After finding a working traversal string, configure your fuzzer to try these file locations, appended to the previously located working directory path:

https://code.google.com/p/fuzzdb/source/browse/trunk/attack-payloads/lfi/common-unix-httpd-log-locations.txt

Fuzzing for Unknown Methods

Improper authorization occurs when an application doesn’t validate whether the current user context has permission to perform the requested command. One common presentation is in applications that use role-based access control: the application uses the current user’s role to determine which menu options to display, but never validates that the chosen option is within the current user’s allowed permission set. Using the application normally, a user would be unlikely to select an option they weren’t allowed to use, because it would never be presented. But if an attacker were to learn these methods, they’d be able to exceed the expected set of permissions for their user role.
Many applications use human-readable values for application methods passed in parameters. FuzzDB contains a list of common web method names that can be fuzzed in an attempt to find functionality that may be available to the user but is not displayed by any menu.

https://code.google.com/p/fuzzdb/source/browse/trunk/attack-payloads/BizLogic/CommonMethods.fuzz.txt

These methods can be injected wherever you see others being passed, such as in GET and POST request parameter values, cookies, serialized requests, REST urls, and with web services.
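The brute-force loop itself is simple: inject each candidate method name into the parameter the application uses, and flag any response that differs from the baseline “unknown method” response. In this sketch, `fetch` is a stand-in for whatever HTTP client your fuzzer uses, and the parameter name “action” is hypothetical.

```python
# Sketch of the brute-force approach: compare each fuzzed response against
# a baseline request for a method name that certainly doesn't exist.
# `fetch(url, params)` is a stand-in for your HTTP client; "action" is a
# hypothetical parameter name -- use whichever the application passes.

def fuzz_methods(fetch, url, methods, param="action"):
    baseline = fetch(url, {param: "noSuchMethodZz"})
    hits = []
    for method in methods:
        response = fetch(url, {param: method})
        if response != baseline:   # different status/length/body: investigate
            hits.append(method)
    return hits
```

In practice you’d compare status code, response length, and key body fragments rather than whole responses, since timestamps and nonces make exact matches rare.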

Protip: in addition to this targeted brute-force approach, it can also be useful to look inside the site’s JavaScript files. If the site designers have deployed monolithic script files that are downloaded by all users regardless of permissions, while each page only calls the functions permitted for the current user’s role, you can sometimes find endpoints and methods that you haven’t observed while crawling the site.

Leftover Debug Functionality

Software sometimes gets accidentally deployed with leftover debug code. When triggered, the results can range from extended error messages that reveal sensitive information about the application’s state or configuration (useful for planning further attacks), to bypassed authentication and/or authorization, to additional test functionality that could violate the integrity or confidentiality of data in ways the developers didn’t intend to occur in production.

FuzzDB contains a list of debug parameters that have been observed in bug reports and in my own experience, plus some that are hypothetical but realistic:
https://code.google.com/p/fuzzdb/source/browse/trunk/attack-payloads/BizLogic/DebugParams.fuzz.txt

Sample file content:

admin=1
admin=true
admin=y
admin=yes
adm=true
adm=y
adm=yes
dbg=1
dbg=true
dbg=y
dbg=yes
debug=1
debug=true
debug=y
debug=yes

“1”, “true”, “y”, and “yes” are the most common values I’ve seen. If you observe a different but consistent scheme in the application you’re assessing, plug that in.

In practice, I’ve had luck using them as name/value pairs in GET requests, POST requests, cookies, and serialized requests in order to elicit a useful (for the tester) response from the server.
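Building the candidate pairs is just a cross product of names and values; the short lists below are samples of the FuzzDB file content shown above, and you’d swap in the application’s own scheme where you’ve spotted one.

```python
from itertools import product

# Sketch: cross common debug parameter names with the common value scheme
# to build candidate name/value pairs for GET/POST parameters and cookies.
# Lists are illustrative samples of the FuzzDB file content.

NAMES = ["admin", "adm", "dbg", "debug"]
VALUES = ["1", "true", "y", "yes"]

debug_pairs = [f"{name}={value}" for name, value in product(NAMES, VALUES)]
```

Each pair is then injected at every injection point your fuzzer supports: query string, POST body, cookie, or inside a serialized request.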

Predictable File Locations

Application installer packages place components into known, predictable locations. FuzzDB contains lists of known file locations for many popular web servers and applications:
https://code.google.com/p/fuzzdb/source/browse/trunk/#trunk%2Fdiscovery%2FPredictableRes

Example: You identify that the server you’re testing is running Apache Tomcat. A list of common locations for interesting default Tomcat files is used to identify information leakage and additional attackable functionality. https://code.google.com/p/fuzzdb/source/browse/trunk/discovery/PredictableRes/ApacheTomcat.fuzz.txt

Example: A directory called /admin is located. FuzzDB provides sets of files that will aid in identifying resources likely to be found in such a directory.

https://code.google.com/p/fuzzdb/source/browse/trunk/discovery/PredictableRes/Logins.fuzz.txt

Forcible Browsing for Potentially Interesting Files

Certain operating systems and file editors can inadvertently leave backup copies of sensitive files. These can end up revealing source code, pages without any inbound links, credentials, compressed backup files, and who knows what else.
FuzzDB contains hundreds of common file extensions, including 186 compressed file format extensions, extensions commonly used for backup versions of files, and a set of “Copy of” primitives as prepended to filenames by Windows servers.

https://code.google.com/p/fuzzdb/source/browse/#svn%2Ftrunk%2Fdiscovery%2FFilenameBruteforce

In practice, you’d use these lists in your fuzzer in combination with filenames and paths discovered while crawling the targeted application.
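The combination step can be sketched as follows. The suffix and prefix lists here are small illustrative samples of what the FuzzDB files contain, not the full sets.

```python
# Sketch: combine filenames discovered while crawling with backup-style
# suffixes and "Copy of" prefixes. The short lists are illustrative
# samples; the FuzzDB files contain hundreds of entries.

BACKUP_SUFFIXES = [".bak", ".old", "~", ".orig", ".zip", ".tar.gz"]
COPY_PREFIXES = ["Copy of ", "Copy (2) of "]

def backup_candidates(path):
    directory, _, name = path.rpartition("/")
    for suffix in BACKUP_SUFFIXES:
        yield path + suffix           # e.g. app/index.php.bak
    for prefix in COPY_PREFIXES:
        yield f"{directory}/{prefix}{name}"
```

Each generated candidate is then requested; a 200 response for index.php.bak where index.php exists is usually a source-code disclosure.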

Upcoming posts will discuss other usage scenarios.