Content blocking has become a hot issue across the Web and mobile ecosystems. It was already becoming pervasive on desktop, and now Apple’s iOS has made it possible to develop iOS applications whose purpose is to block content. This has caused the most recent flurry of activity, concern, and focus. We need to pay attention.
Content blocking is not going away – it is now part of our online experience. But the landscape isn’t well understood, making it harder to know how best to advance a healthy, open Web. Users want it – whether to avoid the display of ads, protect against unwanted tracking, improve load speed, or reduce data consumption – and we need to address how we as an industry should respond. The growing availability and use of content blockers tells us that users want to control their experience, so we wanted to start by hacking on proposed principles for content blocking.
This is a good thing. But some content blocking could be harmful in ways that may not be obvious. For example, if content blocking creates new gatekeepers who can pick winners and losers in the publishing space, or who favor their own content over others’, it ultimately harms competition and innovation. In the long run, users could lose as much control as they gain. The same is true if the commercial model of the Web is not part of the content blocking debate.
In my last post, I conveyed our intention to engage with this landscape, not solely through analysis and research, but also through experimentation, product development, and advocacy.
To help guide our efforts and hopefully inform others, we’ve developed three proposed “content blocking principles” that would help advance the beneficial effects of content blocking while minimizing the risks. We want your help hacking on them. Just as our data privacy principles help guide our data practices, these content blocking principles will help guide what we build and what we support across the industry.
Content is not inherently good or bad – with some notable exceptions, such as malware. So these principles aren’t about what content is OK to block and what isn’t. They speak to how and why content can be blocked, and how the user can be kept at the center of that process.
At Mozilla, our mission is to ensure a Web that is open and trusted and that puts our users in control. For content blocking, here is what we think that means:
- Content Neutrality: Content blocking software should focus on addressing potential user needs (such as performance, security, and privacy) instead of blocking specific types of content (such as advertising).
- Transparency & Control: Content blocking software should provide users with transparency and meaningful controls over the needs it is attempting to address.
- Openness: Blocking should maintain a level playing field and should block under the same principles regardless of the source of the content. Publishers and other content providers should be given ways to participate in an open Web ecosystem, instead of being placed in a permanent penalty box that closes off the Web to their products and services.
Tell us what you think of these proposed principles on your social channels using #contentblocking, and join us on Friday, October 9, at 11am PT for our #BlockParty, a conversation about the problems and possible solutions surrounding content blocking. We look forward to working with our users, our partners, and the rest of the Web ecosystem to advance our shared goal of a healthy, open Web.