Hack in the Box HackWeekDay 2014

Paul Theriault

The Mozilla security team is proud to be once again sponsoring the Hack-in-the-Box HackWeekDay competition, this time at the Haxpo conference in Amsterdam, 28-30 May 2014. Come learn about Firefox OS, make apps to compete for great prizes and help shape the future of the mobile web.

This HackWeekDay event is the biggest yet, and will actually be run over the course of three separate days. There will be daily prizes, and you can compete on as many days as you want:

  • Day 1: Firefox OS Homescreen & WebRTC applications
  • Day 2: Facebook Social/Parse APIs applications
  • Day 3: Combined app hacking competition – build on your apps from previous days, or come up with a new app, and compete for the grand prize of most 1337 app.

You can attend just one day, or compete in all three. For details of the prizes on offer and how to get involved, see the HackWeekDay page on the HITB website.

Firefox OS Homescreen & WebRTC applications

For the first day (28 May), the competition will focus on creating a Firefox OS app that incorporates one of the following themes:

  • Replaceable Homescreen: prototype new homescreen ideas for Firefox OS, and implement this using the new replaceable homescreen feature
  • WebRTC: prototype an app which uses WebRTC, taking advantage of WebRTC to access camera, microphone and/or peer-to-peer networking.

A group of Mozillians will be there to help developers test their entries on Firefox OS devices and award prizes (including the new Firefox OS Flame developer phones!).

Register Now!

How can people get prepared?

Interested developers who want to get started should familiarize themselves with how to develop apps for Firefox OS and learn about WebRTC.

If you want to develop on Firefox OS itself, see the Hacking on Gaia page on MDN.

What are we looking for in the entries?

  • Prototypes which effectively demonstrate a new or interesting feature
  • Innovative use of Web APIs
  • High quality execution (especially on the constraints of mobile)
  • Benefits for the security and privacy of Firefox OS users

What about others who can’t attend the conference?

  • Make apps for the Mozilla marketplace
  • Volunteer to help the Mozilla security team on Firefox OS or anything else (come find us in #security on irc.mozilla.org or email security@mozilla.org)

Will the community be there?

Yes! Mozilla Nederland is planning a presence at the event.

 

$10,000 Security Bug Bounty for Certificate Verification

Daniel Veditz

Firefox developer builds (“Nightly”) are now using a new certificate verification library we’ve been working on for some time, and this code is on track to be released as part of Firefox 31 in July. As we’ve all been painfully reminded recently (Heartbleed, #gotofail) correct code in TLS libraries is crucial in today’s Internet and we want to make sure this code is rock solid before it ships to millions of Firefox users. To that end we’re excited to launch a special Security Bug Bounty program that will pay $10,000 for critical security flaws found and reported in this new code before the end of June.

To qualify for the special bounty the bug and reporter must first meet the guidelines of our normal security bug bounty program (first to file wins in case of duplicates, employees are not eligible, and so on). In addition, to qualify for the special bounty amount the vulnerability must:

  • be in, or caused by, code in security/pkix or security/certverifier as used in Firefox
  • be triggered through normal web browsing (for example “visit the attacker’s HTTPS site”)
  • be reported in enough detail, including testcases, certificates, or even a running proof-of-concept server (a minimal sketch follows this list), that we can reproduce the problem
  • be reported to us by 11:59pm June 30, 2014 (Pacific Daylight Time)
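
For illustration, a “running proof of concept server” can be as small as a few lines of Python that serve your crafted certificate chain over HTTPS. The sketch below is only an example of the idea; chain.pem and leaf-key.pem are hypothetical file names for the chain and private key you generated for your testcase.

# Minimal sketch of a proof-of-concept HTTPS server presenting a crafted
# certificate chain. "chain.pem" and "leaf-key.pem" are hypothetical names.
import http.server
import ssl

httpd = http.server.HTTPServer(("0.0.0.0", 4443),
                               http.server.SimpleHTTPRequestHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="chain.pem", keyfile="leaf-key.pem")
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
print("Serving the test chain on https://localhost:4443/")
httpd.serve_forever()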

We are primarily interested in bugs that allow the construction of certificate chains that are accepted as valid when they should be rejected, and bugs in the new code that lead to exploitable memory corruption. Compatibility issues that cause Firefox to be unable to verify otherwise valid certificates will generally not be considered a security bug, but a bug that caused Firefox to accept forged signed OCSP responses would be.

Valid security bugs that don’t meet the specific parameters of this special program remain eligible for our usual $3000 Security Bug Bounty, of course.

To enter the program please file a security bug at https://bugzilla.mozilla.org/ and send the bug ID or link by mail to security@mozilla.org. If for some reason you cannot file a bug you can send all the details by email, but filing the bug yourself has a couple of advantages for you. First, you will automatically be involved in any discussions the developers have about your bug, and second, if there are multiple reports of the same vulnerability the earliest bug filed wins the bounty. If you wish to encrypt mail to us our key can be found at https://www.mozilla.org/security/#pgpkey.

Exciting Updates to Certificate Verification in Gecko

cviecco

Today we’re excited to announce a new certificate verification library for Mozilla Products – mozilla::pkix! While most users will not notice a difference, the new library is more robust and maintainable. The new code is more robust because certificate path building attempts all potential trust chains for a certificate before giving up (acknowledging the fact that the certificate space is a cyclic directed graph and not a forest). The new implementation is also more maintainable, with only 4,167 lines of C++ code compared to the previous 81,865 lines of code which had been auto-translated from Java to C. The new library also benefits from C++ features such as RAII for automatic resource cleanup.
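
To illustrate the robustness point: a backtracking path builder tries every candidate issuer for a certificate and only fails once every possible chain has been exhausted, while keeping track of certificates already on the chain to avoid cycles. The sketch below is an illustration of that idea in Python, not the actual mozilla::pkix implementation.

# Illustration only; not the mozilla::pkix code. Backtracking path
# building: succeed if *any* chain of issuers reaches a trust anchor,
# instead of giving up after the first dead end. The 'seen' set avoids
# loops, since the certificate space can contain cycles.
def build_path(cert, find_issuers, is_trust_anchor, seen=frozenset()):
    if is_trust_anchor(cert):
        return [cert]
    if cert in seen:
        return None                      # already on this chain: avoid a cycle
    for issuer in find_issuers(cert):    # every cert whose subject matches cert's issuer
        chain = build_path(issuer, find_issuers, is_trust_anchor, seen | {cert})
        if chain is not None:
            return [cert] + chain        # first complete chain to a trust anchor wins
    return None                          # every candidate chain failed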

To provide some more background, Gecko has historically used the certificate verification processing in NSS to ensure that the certificates presented during a TLS/SSL handshake are valid. NSS currently has two code paths for doing certificate verification: “classic”, used by Gecko for Domain Validated (DV) certificate verification, and libPKIX, used by Gecko for Extended Validation (EV) certificate verification. The NSS team has wanted to replace the “classic” verification with libPKIX for some time because libPKIX handles cross-signed certificates better and properly handles the certificate policies required for Extended Validation (EV) certificates. However, libPKIX has proven to be very difficult to work with.

We also took the opportunity to enforce some requirements in Mozilla’s CA Certificate Policy and in the CA/Browser Forum’s Baseline Requirements (BRs). The changes are listed here. While we have performed extensive compatibility testing, it is possible that your website certificate will no longer validate with Firefox 31. This should not be a problem if you use a certificate issued by one of the CAs in Mozilla’s CA Program, because they should already be issuing certificates according to Mozilla’s CA Certificate Policy and the BRs. If you notice an issue due to any of these changes, please let us know.

We are looking for feedback with respect to compatibility and security. For compatibility, we ask all site operators and security testers to install Firefox 31 and use it to browse to your favorite sites. In addition, we ask willing C++ programmers to review our code. This new mozilla::pkix library is located at security/pkix and security/certverifier. A more detailed description is here. If you find an issue, please help us make it better by filing a Bugzilla bug report.

We look forward to your feedback on this new certificate verification library.

Mozilla Security Engineering Team

Testing for Heartbleed vulnerability without exploiting the server.

dchan

Heartbleed is a serious vulnerability in OpenSSL that was disclosed on Tuesday, April 8th, and impacted any sites or services using OpenSSL 1.0.1 through 1.0.1f and 1.0.2-beta1. Due to the nature of the bug, the only obvious way to test a server for the bug was an invasive attempt to retrieve memory – and this could lead to the compromise of sensitive data and/or potentially crash the service.

I developed a new test case that neither accesses sensitive data nor impacts service performance, and am posting the details here to help organizations conduct safe testing for Heartbleed vulnerabilities. While there is a higher chance of a false positive, this test should be safe to use against critical services.

The test works by observing a deviation from the specification in vulnerable versions of OpenSSL: they respond to malformed HeartbeatMessages that should be silently discarded.

Details:
OpenSSL was patched by commit 731f431. This patch addressed 2 implementation issues with the Heartbeat extension:

  1. HeartbeatRequest message specifying an erroneous payload length
  2. Total HeartbeatMessage length exceeding 2^14 (16,384 bytes)

Newer versions of OpenSSL silently discard messages which fall into the above categories. It is possible to detect older versions of OpenSSL by constructing a HeartbeatMessage and not sending padding bytes. This causes the patched code’s check below to evaluate true:

/* Read type and payload length first */
if (1 + 2 + 16 > s->s3->rrec.length)
  return 0; /* silently discard */

Vulnerable versions of OpenSSL will respond to the request. However, no server memory will be read, because the client actually sent payload_length bytes of payload.
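
To make the probe concrete, here is a rough sketch (in Python) of the record the test sends: a heartbeat_request whose payload_length matches the payload actually included, with no trailing padding. This only shows the plaintext record framing; the Metasploit module linked below takes care of sending it at the right point in the TLS exchange.

import struct

def build_probe_heartbeat(payload: bytes = b"\x01\x02\x03\x04") -> bytes:
    # Sketch of the detection probe. Patched OpenSSL silently discards it
    # (the "1 + 2 + 16" padding check above); vulnerable versions echo the
    # payload back, without leaking memory because payload_length is honest.
    hb_type = 1  # heartbeat_request
    hb_msg = struct.pack("!BH", hb_type, len(payload)) + payload
    # TLS record header: content type 24 (heartbeat), version TLS 1.1, length
    return struct.pack("!BHH", 0x18, 0x0302, len(hb_msg)) + hb_msg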

False positives may occur when all the following conditions are met (but it is unlikely):

  1. The service uses a library other than OpenSSL
  2. The library supports the Heartbeat extension
  3. The service has Heartbeat enabled
  4. The library performs a fixed length padding check similar to OpenSSL

False negatives may occur when all the following conditions are met, and can be minimized by repeating the test:

  1. The service uses a vulnerable version of OpenSSL
  2. The Heartbeat response isn’t received by the testing client

I have modified the Metasploit openssl_heartbleed module to support the ‘check’ option.

You can download the updated module at
https://github.com/dchan/metasploit-framework/blob/master/modules/auxiliary/scanner/ssl/openssl_heartbleed.rb

We hope you can use this to test your servers and make sure any vulnerable ones get fixed!

David Chan
Mozilla Security Engineer

Heartbleed Security Advisory

Sid Stamm

Issue

OpenSSL is a widely-used cryptographic library which implements the TLS protocol and protects communications on the Internet. On April 7, 2014, a bug in OpenSSL known as “Heartbleed” was disclosed (CVE-2014-0160). This bug allows attackers to read portions of the affected server’s memory, potentially revealing data that the server did not intend to reveal.

Impact

Two Mozilla systems were affected by Heartbleed. Most Persona and Firefox Account (FxA) servers run in Amazon Web Services (AWS), and their encrypted TLS connections are terminated on AWS Elastic Load Balancers (ELBs) using OpenSSL. Until April 8, when Amazon resolved the bug in AWS, those ELBs used a version of OpenSSL vulnerable to the Heartbleed attack.

Because these TLS connections terminated on Amazon ELBs instead of the backend servers, the data that could have been exposed to potential attackers was limited to data on the ELBs: TLS private keys and the plaintext contents of encrypted messages in transit.

For the Persona service, this included the bearer tokens used to authenticate sessions to Persona infrastructure run by Mozilla (including the “fallback” Persona IdP service). Knowledge of these tokens could have allowed forgery of signed Persona certificates.

For the Firefox Account service, this included email addresses, derivatives of user passwords, session tokens, and key material (see the FxA protocol for details).

Raw passwords are never sent to the FxA account server. Neither the account server nor a potential attacker could have learned the password or the encryption key that protects Sync data.

Sensitive FxA authentication information is only transmitted during the initial login process. On subsequent messages, the session token is used as an HMAC key (in the HAWK protocol), and not delivered over the connection. This reduces the amount of secret material visible in ELB memory.
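
As a simplified illustration of that idea (this is not the actual HAWK normalization), the client sends an HMAC computed with the session token over the request, rather than the token itself; the path and token below are placeholders.

import hashlib
import hmac

# Simplified illustration; NOT the real HAWK algorithm. Only a MAC derived
# from the session token crosses the wire, so the token itself never sits
# in the load balancer's memory for these requests.
def sign_request(session_token: bytes, method: str, path: str, body: bytes) -> str:
    msg = method.encode() + b"\n" + path.encode() + b"\n" + body
    return hmac.new(session_token, msg, hashlib.sha256).hexdigest()

mac = sign_request(b"example-session-token", "GET", "/v1/example/endpoint", b"")
# The request then carries `mac` in its Authorization header instead of the token.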

Status

We have no evidence that any of our servers or user data has been compromised, but the Heartbleed attack is very subtle and leaves no evidence by design. At this time, we do not know whether these attacks have been used against our infrastructure or not. We are taking this vulnerability very seriously and are working quickly to validate the extent of its impact.

Amazon has updated their ELB instances to fix the vulnerability. We have re-generated TLS keys for all production services, and revoked the possibly exposed keys and certificates. Subsequent sessions with Persona and Firefox Accounts are not vulnerable to the Heartbleed attack.

As a precaution, we have revoked all Persona bearer tokens, effectively signing all users out of Persona. The next time you use Persona you may need to re-enter your password.

Because Firefox Accounts session tokens are not used as bearer tokens, we believe it was unnecessary to revoke them.

Additional User Precautions

Although we have no evidence that any data was compromised, concerned users can take the following additional precautions:

  • Persona: if you have a fallback account, you can change the password. This will require you to re-enter your password, on each browser, the next time you use Persona.
  • Firefox Accounts (FxA): you can change your account password. This will invalidate existing sessions, requiring you to sign back into Sync on all your devices. Devices will not sync until you sign back in.
  • If you have used the same password on multiple sites or services, you should change it on all of those services to protect yourself.

Using FuzzDB for Testing Website Security

amuntner

After posting an introduction to FuzzDB I received the suggestion to write more detailed walkthroughs of the data files and how they could be used during black-box web application penetration testing. This article highlights some of my favorite FuzzDB files and discusses ways I’ve used them in the past.

If there are particular parts or usages of FuzzDB you’d like to see explored in a future blog post, let me know.

Exploiting Local File Inclusion

Scenario: While testing a website you identify a Local File Inclusion (LFI) vulnerability. FuzzDB can help identify several pieces of information needed to exploit LFI bugs in various ways. (There is a nice cheatsheet here: http://websec.wordpress.com/2010/02/22/exploiting-php-file-inclusion-overview/)

The first is directory traversal: how far do we need to traverse, and how do the characters have to be encoded to bypass defensive relative-path-traversal blacklists, a common but poor security mechanism employed by many applications?
FuzzDB contains an eight-directory-deep set of directory traversal attack patterns using various exotic URL encoding mechanisms: https://code.google.com/p/fuzzdb/source/browse/trunk/attack-payloads/path-traversal/traversals-8-deep-exotic-encoding.txt

For example:

/%c0%ae%c0%ae\{FILE}

/%c0%ae%c0%ae\%c0%ae%c0%ae\{FILE}

/%c0%ae%c0%ae\%c0%ae%c0%ae\%c0%ae%c0%ae/{FILE}

In your fuzzer, you’d replace {FILE} with a known file location appropriate to the type of system you’re testing, such as the string “etc/passwd” (for a UNIX target), then review the returned responses for ones indicating success, i.e., that the targeted file was successfully retrieved. In terms of workflow, try sorting the responses by the number of bytes returned; the successful responses will usually be immediately apparent.
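
A minimal sketch of that workflow, using only the Python standard library (the target URL and parameter are hypothetical placeholders for the vulnerable request you identified):

import urllib.request

# Sketch: substitute {FILE} in each FuzzDB traversal pattern, send the
# request, and record the response size so successful traversals stand out.
base = "http://target.example/view?page="
results = []
with open("traversals-8-deep-exotic-encoding.txt") as patterns:
    for pattern in (line.strip() for line in patterns if line.strip()):
        url = base + pattern.replace("{FILE}", "etc/passwd")
        try:
            body = urllib.request.urlopen(url, timeout=10).read()
            results.append((len(body), url))
        except Exception:
            pass  # errors and non-2xx responses are uninteresting here

# Sort by response size; the successful traversals are usually obvious outliers.
for size, url in sorted(results, reverse=True):
    print(size, url)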

The cheatsheet discusses a method of including injected PHP code, but in order to do this, you need to be able to write to the server’s disk. Two places where the HTTPD daemon typically has write permissions are the access and error logs. FuzzDB contains a file of common locations for HTTP server log files culled from popular distribution packages. After finding a working traversal string, configure your fuzzer to try these file locations, appended to the previously located working directory path:

https://code.google.com/p/fuzzdb/source/browse/trunk/attack-payloads/lfi/common-unix-httpd-log-locations.txt

Fuzzing for Unknown Methods

Improper Authorization occurs when an application doesn’t validate whether the current user context has permission to perform the requested command. One common presentation is in applications which utilize role-based access control, where the application uses the current user’s role in order to determine which menu options to display, but never validates that the chosen option is within the current user’s allowed permissions set. Using the application normally, a user would be unlikely to be able to select an option they weren’t allowed to use because it would never be presented. If an attacker were to learn these methods, they’d be able to exceed the expected set of permissions for their user role.
Many applications use human-readable values for application methods passed in parameters. FuzzDB contains a list of common web method names that can be fuzzed in an attempt to find functionality that may be available to the user but is not displayed by any menu.

https://code.google.com/p/fuzzdb/source/browse/trunk/attack-payloads/BizLogic/CommonMethods.fuzz.txt

These methods can be injected wherever you see others being passed, such as in GET and POST request parameter values, cookies, serialized requests, REST urls, and with web services.
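
A short sketch of that brute-force approach, assuming a hypothetical “action” parameter carries the method name; responses that differ from a deliberately invalid baseline are worth a closer look.

import urllib.parse
import urllib.request

# Sketch: POST each FuzzDB method name into the (hypothetical) "action"
# parameter and flag responses that differ from a known-bad baseline.
target = "http://target.example/app"

def post(action):
    data = urllib.parse.urlencode({"action": action}).encode()
    try:
        return urllib.request.urlopen(target, data=data, timeout=10).read()
    except Exception:
        return b""

baseline = post("noSuchMethod12345")  # response for a method that cannot exist
with open("CommonMethods.fuzz.txt") as methods:
    for method in (line.strip() for line in methods if line.strip()):
        if post(method) != baseline:
            print("possible hidden method:", method)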

Protip: In addition to this targeted brute-force approach, it can also be useful to look inside the site’s Javascript files. If the site designers have deployed monolithic script files that are downloaded by all users regardless of permissions, while the pages displayed to a given user only call the functions permitted for that user’s role, you can sometimes find endpoints and methods that you haven’t observed while crawling the site.

Leftover Debug Functionality

Software sometimes gets accidentally deployed with leftover debug code. When triggered, the results can range from extended error messages that reveal sensitive information about the application’s state or configuration (useful for planning further attacks), to bypassing authentication and/or authorization, to exposing additional test functionality that could violate the integrity or confidentiality of data in ways the developers never intended to occur in production.

FuzzDB contains a list of debug parameters that have been observed in bug reports and in my own experience, plus some that are hypothesized but realistic:
https://code.google.com/p/fuzzdb/source/browse/trunk/attack-payloads/BizLogic/DebugParams.fuzz.txt

Sample file content:

admin=1
admin=true
admin=y
admin=yes
adm=true
adm=y
adm=yes
dbg=1
dbg=true
dbg=y
dbg=yes
debug=1
debug=true
debug=y
debug=yes

“1”, “true”, “y”, and “yes” are the most common values I’ve seen. If you observe a different but consistent scheme in use in the application you’re assessing, plug that in.

In practice, I’ve had luck using them as name/value pairs for GET requests, POST requests, as cookie name/value pairs, and inside serialized requests in order to elicit a useful response (for the tester) from the server.

Predictable File Locations

Application installer packages place components into known, predictable locations. FuzzDB contains lists of known file locations for many popular web servers and applications:
https://code.google.com/p/fuzzdb/source/browse/trunk/#trunk%2Fdiscovery%2FPredictableRes

Example: You identify that the server you’re testing is running Apache Tomcat. A list of common locations for interesting default Tomcat files is used to identify information leakage and additional attackable functionality. https://code.google.com/p/fuzzdb/source/browse/trunk/discovery/PredictableRes/ApacheTomcat.fuzz.txt

Example: A directory called /admin is located. FuzzDB’s lists of common file and resource names can then be used to identify resources likely to be in such a directory.

https://code.google.com/p/fuzzdb/source/browse/trunk/discovery/PredictableRes/Logins.fuzz.txt

Forcible Browsing for Potentially Interesting Files

Certain operating systems and file editors can inadvertently leave backup copies of sensitive files. This can end up revealing source code, pages without any inbound links, credentials, compressed backup files, and who knows what.
FuzzDB contains hundreds of common file extensions, including 186 compressed file format extensions, extensions commonly used for backup versions of files, and a set of “COPY OF” primitives that can be prepended to filenames by Windows servers.

https://code.google.com/p/fuzzdb/source/browse/#svn%2Ftrunk%2Fdiscovery%2FFilenameBruteforce

In practice, you’d use these lists in your fuzzer in combination with filenames and paths discovered while crawling the targeted application.
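
For example, a small helper can combine paths found while crawling with a sample of those FuzzDB primitives to produce candidate URLs for forced browsing (the discovered paths and the trimmed-down suffix list below are placeholders):

# Sketch: generate backup-file candidates from paths discovered while
# crawling, using a small sample of FuzzDB's extension and "Copy of" lists.
discovered = ["/index.php", "/admin/config.php"]          # placeholders
suffixes = [".bak", ".old", "~", ".orig", ".zip", ".tar.gz"]
prefixes = ["Copy of ", "Copy (2) of "]

candidates = []
for path in discovered:
    candidates += [path + s for s in suffixes]
    directory, _, name = path.rpartition("/")
    candidates += [directory + "/" + p + name for p in prefixes]

for url in candidates:
    print(url)  # feed these into your fuzzer's request loop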

Upcoming posts will discuss other usage scenarios.

Update on Plugin Activation

Chad Weiner

To provide a better and safer experience on the Web, we have been working to move Firefox away from plugins.

After much testing and iteration, we determined that Firefox would no longer activate most plugins by default, and instead opted to let people choose when to enable plugins on the sites they visit. We call this feature click-to-play plugins.

We strongly encourage site authors to phase out their use of plugins. The power of the Web itself, especially with new technologies like emscripten and asm.js, makes plugins much less essential than they once were. Plus, plugins present real costs to Firefox users. Though people may not always realize it, we know plugins are a significant source of poor performance, crashes and security vulnerabilities.

Developers will increasingly find what they need in the Web platform, but we also recognize that it will take some time for them to migrate to better options. Also, we know there are plugins that our users rely on for essential tasks and we want to provide plugin authors and developers with a short-term exemption from our default click-to-play approach. Today, we’re announcing the creation of a temporary plugin whitelist.

Any plugin author can submit an application to be considered for inclusion on the whitelist by following the steps outlined in our plugin whitelist policy. Most importantly, we are asking for authors to demonstrate a credible plan for moving away from NPAPI-based plugins and towards standards-based Web solutions.

Today marks the beginning of an application window that will run until March 31, 2014. Any plugin author’s application received before the deadline will be reviewed and processed before click-to-play is activated by default in Firefox. Whitelisted status will be granted for four consecutive Firefox releases and authors may reapply for continued exemption as the end of the grace period draws near.

Our vision is clear: a powerful and open Web that runs everywhere without the need for special-purpose plugins. The steps outlined here will move us toward that vision, while still balancing today’s realities.

– Chad Weiner, Director of Product Management

Mozilla Security @ BSidesVancouver and CanSecWest

yboily

This year Mozilla will be sponsoring BSidesVancouver, a free community oriented event on March 10th & 11th in Vancouver, BC. This event is very much in the spirit of the Mozilla community and mission, and several of our security team members will be attending both BSidesVancouver and CanSecWest.

In addition to our team members attending the event, Jeff Bryner and Curtis Koenig will be speaking at the event about some aspects of the security processes and technologies that Mozilla uses and has built. If you are going to be at these events and would like to connect with us at BSidesVancouver or CanSecWest, send us a message at security@mozilla.org, or reach out to us on Twitter (@mozsec).

Reporting Web Vulnerabilities to Mozilla using Zest

Simon Bennetts

Overview

We always want to hear about potential vulnerabilities in our software, and have a long running Bug Bounty program to reward those who find serious security bugs.

However we sometimes receive bug notifications for vulnerabilities in our websites that are difficult to reproduce.

This is one of the reasons why we developed Zest: a security scripting language.

We would like to encourage everyone to submit vulnerability reports for server side web applications using Zest. There are plans for Zest to also handle client side vulnerabilities in the future.

Introducing Zest

Zest is an experimental specialized scripting language (also known as a domain-specific language) developed by the Mozilla security team and intended to be used in web-oriented security tools.

Zest scripts are defined in JSON, but they are designed to be represented visually in security tools.

Zest is completely free, open source and can be included in any tool whether open or closed, free or commercial.

Creating a simple Zest script using ZAP

To demonstrate how to create a Zest script we will use the OWASP Zed Attack Proxy (ZAP) which has built in support for Zest.

ZAP is an intercepting proxy, so you will need to configure your browser to proxy through ZAP. Details of how to do this are included in the ZAP help file, but if you are unsure how to do this then you can also use Plug-n-Hack as described in the next section, as this will configure your browser for you.

The latest version of the Zest add-on for ZAP provides a toolbar button for quickly recording Zest scripts:

If this button is not shown, then click on the ‘Manage Add-ons’ button on the toolbar (the 3 stacked blocks), click the ‘Check for updates’ button and update the Zest add-on.

Note that while Zest is included with ZAP, there is a known problem whereby Zest support can get removed from ZAP after an update, so if Zest is not included in the list of installed add-ons then select the Marketplace tab, find and select the Zest add-on and install it from there. In either case you should restart ZAP.

Clicking on the “Record a new Zest script…” button will open a dialog for creating a new Zest script. You only need to give the script a title, but if you also select (or type) a prefix from the pull-down list of sites you have already accessed, ZAP will only record requests with that prefix. The toolbar button will turn red and stay pressed to indicate that you are recording.

Now use your browser to reproduce the server side vulnerability that you wish to report.

When you have finished click the toolbar button again to stop recording. The button will turn black.

If you now look at ZAP you should see a graphical representation of the script you have just recorded in the ‘Scripts’ tab and the JSON representation in the ‘Script Console’ tab:

Creating a simple Zest script using ZAP and PnH

If you are new to ZAP then an alternative approach is to use Plug-n-Hack (PnH), another initiative from the Mozilla Security Team, which is covered in another blog post.

To configure Firefox to use ZAP just click on the ‘Plug-n-Hack’ button on the ZAP ‘Quick Start’ tab:

Install the Plug-n-Hack Firefox Add-on and accept all of the dialogs. Note that we recommend that you use a separate Firefox profile for security testing.

Your browser should now be proxying via ZAP – try visiting some sites and verify that they appear in the ZAP ‘History’ tab.

You can now record a Zest script as above, but you can also control both PnH and ZAP via the Firefox Developer Toolbar.

Use Shift+F2 to access the Developer Toolbar and then type ‘zap’ – you should see a list of commands like:

To record a Zest script in ZAP select (or type) the following command:

zap record on global

Now use your browser to reproduce the server side vulnerability that you wish to report.

When you have finished select (or type) the following command:

zap record off global

The Zest script will now be visible in the ‘Scripts’ tab and the JSON representation in the ‘Script Console’ tab as above.

Reporting the bug

To report the issue, please file a bug in Bugzilla, clearly describing the problem as you understand it.

Check that your Zest script does not contain any personal data, then save it to disk using the ‘Save Script..’ button in the ‘Scripts’ tab.

This script includes the data that you sent and received in your browser while you were recording, which allows us to see exactly what you did and what the result was. This makes reproducing potential vulnerabilities much easier.

Attach this file, which contains the JSON version of your script, via the Web Bounty Form.

For more details about how to submit security bugs see the ‘Process’ section of the Bug Bounty page.

You can just attach the script as is, but you may also want to edit it before submitting it to us.

Editing Zest scripts in ZAP

You can double click on any Zest node in the tree and edit it. You can also right click on nodes to delete them. This means that you can easily remove requests that are unrelated to the problem you are reporting.

If you select a Zest Request node then the related request and response will be shown in the ZAP ‘Request’ and ‘Response’ tabs.

You can also redact strings in the responses that you don’t want to include, for example session cookies and passwords.

To do this select the relevant request and then select the ‘Response’ tab.

Find and highlight the relevant string, right click on it and select ‘Redact Text…’

This will cause a dialog to be shown which allows you to specify the replacement string (by default, five ‘block’ characters) and offers an option to ‘Apply to all current requests’, which will cause the string to be replaced everywhere it appears in the script.

Running Zest scripts in ZAP

You can run Zest scripts in ZAP via the ‘Run’ button in the ‘Script Console’ tab.

Note that this is only enabled for ‘stand alone’ scripts – ZAP supports many other types of scripts which are integrated with ZAP features like the active scanner and therefore cannot be run independently.

When you run your script you will see the requests and responses shown in the ‘Zest Results’ tab.

You may see that some requests are flagged as failing.

This is because by default ZAP adds 2 assertions to each request – these check that the status code matches and that the response length is the same as before, plus or minus 2%. You can remove or change these assertions and add new ones if you like, all via right click menus.

You can compare new results with the previous ones by right clicking the request in the ‘Zest results’ tab and selecting ‘Zest: Compare with original response’:

Creating Advanced Zest Scripts in ZAP

You can add new requests to a Zest script by right clicking on any request in ZAP and selecting ‘Add to Zest Script’:

Zest supports other types of statements, including:

  • Conditionals
  • Loops
  • Assignments
  • Actions
  • Controls

These can all be added via right clicking on the Zest tree nodes:

These statements allow very powerful scripts to be created quickly and easily.

ZAP also adds useful features, such as automatically adding assign statements to handle any anti-CSRF tokens it detects.

For more information about these statements see the Zest pages on MDN.

Demo

I demoed Zest at AppSec USA in November 2013, and the full video of my talk is available on YouTube. The Zest part of the talk starts at 23:47.

Feedback

Zest is still at an early stage of development and all constructive feedback is very welcome.

Anyone can contribute to the onward development of Zest, and teams or individuals who develop security tools are especially welcome to join and help shape Zest’s future.

The Zest code is on GitHub and there is a Google Group for discussing everything about Zest.

On the X-Frame-Options Security Header

Frederik Braun

A few weeks ago, Mario Heiderich and I published a white paper about the X-Frame-Options security header. In this blog post, I want to summarize the key arguments for setting this security header in your web application.

X-Frame-Options is an optional HTTP response header that was introduced in 2008 and found its first implementation in Internet Explorer 8. Setting this header in your web application defines whether it may be displayed within a frame element (e.g., an iframe). The syntax for this header provides three options: ALLOW-FROM, DENY, and SAMEORIGIN. Not sending this header implies allowing framing in general. ALLOW-FROM allows whitelisting a specific origin. The opposite is, of course, DENY, which means that no website is ever allowed to display your website in a frame. A common middle ground is to send SAMEORIGIN, which means that only websites of the same origin may frame it.
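
Sending the header takes a single line in most servers and frameworks. As a minimal, framework-free sketch, here is a Python standard-library server that restricts framing to its own origin (the concrete Django, helmet and nginx options are mentioned at the end of this post):

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Only pages from the same origin may frame this response.
        self.send_header("X-Frame-Options", "SAMEORIGIN")
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>Framing restricted to my own origin</h1>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()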

This blog post will highlight some attacks that can be thwarted by forbidding the framing of your document. First of all, Clickjacking. This term gained major attention in 2008 and covers a multitude of techniques in which an evil web page can secretly include yours in a frame. The author of this evil website will make your website transparent and present buttons on top of it. Anyone visiting this evil page will then click on something seemingly unrelated, which will actually result in mouse clicks in your web application.

A wide class of attacks on websites leverages missing security features in the browser. Most modern browsers provide hardened security mechanisms that can easily thwart problems with content injections. The problem lies, as so often, in backwards compatibility. The most recent browser versions are obviously more secure than the previous ones. But when somebody frames your website, they can tell it to run in a compatibility mode. This only applies to Internet Explorer, but it will bring back the vintage rendering algorithms from IE7 (2006). In Internet Explorer, the document mode is inherited from the top window by all frames. If the evil website runs in IE7 compatibility mode, then so does yours! This is an example of how IE7 compatibility can be triggered in any website:

<meta http-equiv="X-UA-Compatible" content="IE=7" />

If your website did not allow itself to be framed, your IE users would not be at risk.

Another technique available to attackers comes with window.name. This attribute of your browsing window (a tab, a popup and a frame are all windows in JavaScript’s sense) can be set by others, and you cannot prevent it. The implications of this are manifold, but just for the sake of Cross-Site Scripting (XSS) attacks it can make things much easier for an attacker. Sometimes, when an attacker is able to inject and execute scripts on your web page, he might be hindered by a length restriction. Say, for example, your website does not allow names that exceed 80 characters, or messages that exceed 140. The window.name property can help bypass these restrictions in a very easy way. The attacker can just frame your website and give it a name of his liking, by supplying it in the frame’s name attribute. The JavaScript he then executes can be as short as <svg/onload=eval(name)>, which means that it will execute the JavaScript specified in the name attribute of the frame element.

These and many other attacks are possible if you allow your web page to be displayed in a frame. Just recently, Isaac Dawson from Veracode published a report about security headers on the top 1 million websites, which shows that only 30,000 of them currently supply this header. However, the fact that many other sites are vulnerable to these sorts of attacks is not a good reason to leave your website unprotected. You can easily address many security problems by adding this simple header to your web application right away: if you’re using Django, check out the XFrameOptionsMiddleware. For NodeJS applications, you can use the helmet library to add security headers. If you want to set this header directly from within Apache or nginx, just take a look at the X-Frame-Options article on MDN.