Shutting Down XSS with Content Security Policy

Brandon Sterne


For several years, Cross-Site Scripting (XSS) attacks have plagued many of the web’s most popular sites and victimized their users. At Mozilla, we’ve been working for the last year on a new technology called Content Security Policy, designed to shut these attacks down. We wanted to give a bit of background on this project as well as provide an update on our progress so far.

XSS is possible because all the content received as part of a web server response is treated with equal privilege by the requesting browser. JavaScript and other content included in a web page are all combined into a single security context which has full access to the DOM. Content Security Policy (CSP) provides a mechanism for sites to explicitly tell the browser which content is legitimate. The browser can then disregard any content which has not been blessed by the site.

In order to differentiate legitimate content from injected or modified content, CSP requires that all JavaScript for a page be 1) loaded from an external file, and 2) served from an explicitly approved host. This means that all inline script, javascript: URIs, and event-handling HTML attributes will be ignored. Only script included via a <script> tag pointing to a white-listed host will be treated as valid. Additionally, CSP allows several other common-sense security restrictions to be enforced.
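
To make the model concrete, here is a rough sketch of a policy and the markup it affects. The host names are hypothetical, and the header and directive names used here are illustrative; the exact syntax may not match the current draft, so treat this as an illustration of the model rather than the final wire format.

    Content-Security-Policy: default-src 'self'; script-src scripts.example.com

    <!-- Ignored under the policy above: inline script, javascript: URIs,
         and event-handling attributes -->
    <script>doSomethingRisky();</script>
    <a href="javascript:doSomethingRisky()">click me</a>
    <img src="logo.png" onmouseover="doSomethingRisky()">

    <!-- Honored: an external script loaded from the approved host -->
    <script src="https://scripts.example.com/app.js"></script>

An injected payload such as <script>alert(document.cookie)</script> falls into the first category: no matter how it found its way into the page, the browser never executes it.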

We realize that this model is dramatically different from the current unrestricted model for the Web. We offer the following case supporting CSP’s adoption:

  1. CSP can be implemented in phases.

    While the biggest security benefit offered by Content Security Policy is the mitigation of XSS through inline script blocking, the migration of all JavaScript to external files may be challenging or time-consuming for some sites. Therefore, sites may choose to use the other features of Content Security Policy without adopting the JavaScript restrictions. Our hope is that this flexibility will provide a wide gate for such sites to adopt CSP in a limited fashion early, and later move toward a full implementation as time and resources permit.

  2. Even complex sites can be modified to support CSP.

    We have looked at HTML/JavaScript samples from a wide variety of websites ranging in complexity and have yet to see an example which could not be modified to support CSP. We’ll provide documentation regarding best practices for migrating a site to use CSP. Content Security Policy is also consistent with the programming paradigm “don’t mix code with content”, so there may be additional functional benefits to be gained by implementing such separation; a brief sketch of what this separation looks like appears after this list.

  3. Drive a stake through the heart of XSS!

    XSS vulnerabilities have real value to attackers and are shared rapidly across the Web once discovered. Sites can breathe a little easier knowing that their users are protected, even if an XSS bug slips through. Because CSP can be configured to notify the protected site when an attack is blocked (see the sketch after this list), CSP will even benefit users of older browsers by helping sites identify and plug vulnerabilities quickly. The bottom line is that it will be extremely difficult to mount a successful XSS attack against a site with CSP enabled. All common vectors for script injection will no longer work, and the bar for a successful attack is placed much, much higher.
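
To make points 2 and 3 more concrete, here is a hedged sketch of what the migration and the notification hook might look like. The file names, host, and report endpoint are hypothetical, and the directive names (in particular report-uri) are illustrative and may not match the draft syntax exactly.

    <!-- Before: behavior mixed into the markup -->
    <button onclick="saveDraft()">Save</button>

    <!-- After: markup only; the behavior moves to an approved external file -->
    <button id="save">Save</button>
    <script src="https://scripts.example.com/app.js"></script>

    /* app.js: attach the handler here instead of in an HTML attribute.
       saveDraft is assumed to be defined elsewhere in this file. */
    document.getElementById("save").addEventListener("click", saveDraft);

    Content-Security-Policy: default-src 'self'; script-src scripts.example.com; report-uri /csp-report

With a reporting endpoint in the policy, the browser can send the site a short report whenever it blocks something, which is what lets a site spot an injection attempt even though the injected code itself never runs.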

Content Security Policy has been a collaboration of many individuals and has received input from multiple web sites, browser vendors, and web app security researchers. We are very excited to have reached a level of stability in the design that has allowed us to begin implementation of the CSP specification. Stay tuned for further updates. We will let you know when the fixes have been checked in to trunk and the product is ready to be tested in our nightly builds. Let us know what you think!

Brandon Sterne
Security Program Manager

28 responses

  1. Daniel Veditz wrote on :

    @John Bell: It’s good feedback, we’ll have to think about it. It’s certainly “possible” — it’s just code after all.

    I have mixed feelings. I’d really like to get people excited about CSP, site authors and other browser vendors in particular. A user-agent override doesn’t belong in the specification, which is a contract between the site and the browser, and while supporting overrides is a valid implementation decision, talking about it is a bit of a distraction and may even undercut the message to site authors. It would also, of course, require more work on our part, so I’m not unbiased :-)

    On reflection, though, overrides might be a required workaround to avoid breaking add-ons that create mash-ups or add elements to content. Argh.

  2. bugmenot wrote on :

    When is this CSP implementation expected to be available?

  3. Arun Ranganathan wrote on :

    @Andre: an early version of this solution was submitted to the W3C Web Apps WG. See for example this thread of correspondence:
    http://lists.w3.org/Archives/Public/public-webapps/2008AprJun/0416.html

    The charter of that WG was never modified to include CSP. We’re keen on working with a standards-setting body, and are still in the process of determining where the proposal works best.

  4. John Bell wrote on :

    @Daniel Veditz: I certainly understand why you want to keep a narrow focus right now. If it were me, I’d actually think about building it into the spec, since it’s something that modifies the ruleset applied to the final page. But obviously it’s your spec, so you can handle it however you want. I just suspect that one of the questions you’ll get from anybody you’re trying to get excited about CSP is whether it breaks any existing functionality, and (as one of the site authors in question) I know that “yes” isn’t as good an answer to that question as “yes, but we have a way around it”.

    But in any case, I’m glad you’re at least open to the possibility. Thanks for listening.

  5. voracity wrote on :

    Could someone perhaps explain what is wrong with sub-page sandboxing?

    In particular, what does page-level sandboxing solve that sub-page level sandboxing cannot?

  6. AckNack wrote on :

    “On reflection, though, overrides might be a required workaround to avoid breaking add-ons that create mash-ups or add elements to content. Argh.”

    Yes. Greasemonkey would be sure to fail too, unless it’s given privs to inject JavaScript and content into web pages as it does now.

    As part of these changes, pleeeeease preserve the ability for add-ons to continue to function correctly by allowing event/content/JavaScript injection as they can now do. For example, the DejaClick addon has a feature to annotate web recordings. It does this by injecting user-recorded popup notes (HTML or SVG) into web pages as it replays them (via HTML and CSS content injection). Also, there will soon be a feature in DejaClick to allow users to inject snippets of JavaScript code before or after a replayed event for special circumstances.

    Add-on developers already went through a similar fiasco by having to add “contentaccessible=yes” to the content directives of all their chrome.manifest files, so I certainly hope we don’t make the same mistakes with this stuff.

  7. Daniel Veditz wrote on :

    “CSP, brought to you by the same guys who gave you contentaccessible=yes” — I sense our popularity with add-on developers rising by the day.

  8. austin cheney wrote on :

    From a technology perspective this is an elegant solution for two reasons.

    1) It will mostly cure XSS. I have been telling people for months that mitigation is not a solution.

    2) It forces JavaScript off the damn page. This is something HTML should have fixed years ago. In-page JavaScript has been a horrible culprit in namespace collisions, which can occur when script from third-party ads crashes your own code.

    I have noticed four problems that may cause this proposition to fail:

    1) Advertising business interference
    Unfortunately this solution is horribly flawed from a business perspective. Business on the web is extremely reliant upon advertisements whose metrics are tracked by execution of client-side script supplied by those ads. It seems this solution will also eliminate this revenue stream. I expect this to meet some serious criticism.

    2) Not a standard
    I see no indication that this venture is representative of any sort of standard. Security is a universal problem that demands a universal solution. A Firefox-favoring proprietary solution will certainly benefit Firefox as a product and reposition it in the grab for market share, but it will not fix the problem. I strongly recommend that this proposal be written as an Internet-Draft and submitted to the IETF as an extension of URI/HTTP.

    3) No client-side control mechanism
    The root of the problem is that malicious software written by a stranger is executing at will in the user agent. This solution provides a mechanism for the site to inform the browser of the approved script repository, and only script from that repository is then executed in accordance with the status quo. That mechanism will likely take the form of an HTTP header, to provide transparency to the user. Unfortunately, this still does not give the user any control over which code should or should not execute. If malicious code is living in the blessed repository, then nothing is gained and attacks continue without the user’s knowledge. A user must be able to make decisions about what is executing in their own software on their own machine, or the problem is merely diminished instead of solved.

    4) Relay points
    There is no indication that scripts cannot point to a reference outside the blessed sandbox, which opens possibilities of attack from a couple of different angles. If a script uses the XMLHttpRequest object to point to a location outside the blessed sandbox for additional instructions, then nothing is gained by this mechanism. The mere opening of such a connection could be a point of exploitation, even if the code or data being retrieved is entirely benign. Again, there must be notifications provided to the user before code executes locally without prior discriminating user consent.

    You guys have my full support on this endeavor, as drastic steps must be taken to solve this problem. There are more XSS vulnerabilities reported than all other computer security vulnerabilities combined and doubled. If this problem is not solved by any means necessary, commerce on the web will fail, and the web itself will fail.

    Before becoming aware of this work, I posted a solution for migrating script execution to email using a secure mechanism. The proposal is completely sound from both a technology and a security perspective, but is so far extremely unpopular for eliminating the execution of events. Since this alternate solution is based on a more complex transmission mechanism, it may or may not be helpful. I hope it may be able to provide some additional guidance that has not been previously considered.
    http://www.ietf.org/id/draft-cheney-safe-04.txt
