In September the European Commission proposed a new regulation that seeks to tackle the spread of ‘terrorist’ content on the internet. As we’ve noted already, the Commission’s proposal would seriously undermine internet health in Europe, by forcing companies to aggressively suppress user speech with limited due process and user rights safeguards. Here we unpack the proposal’s shortfalls, and explain how we’ll be engaging on it to protect our users and the internet ecosystem.
As we’ve highlighted before, illegal content is symptomatic of an unhealthy internet ecosystem, and addressing it is something that we care deeply about. To that end, we recently adopted an addendum to our Manifesto, in which we affirmed our commitment to an internet that promotes civil discourse, human dignity, and individual expression. The issue is also at the heart of our recently published Internet Health Report, through its dedicated section on digital inclusion.
At the same time, lawmakers in Europe have made online safety a major political priority, and the Terrorist Content regulation is the latest legislative initiative designed to tackle illegal and harmful content on the internet. Yet, while terrorist acts and terrorist content are serious issues, the response that the European Commission is putting forward with this legislative proposal is unfortunately ill-conceived, and will have many unintended consequences. Rather than creating a safer internet for European citizens and combating the serious threat of terrorism in all its guises, this proposal would undermine due process online; compel the use of ineffective content filters; strengthen the position of a few dominant platforms while hampering European competitors; and, ultimately, violate the EU’s commitment to protecting fundamental rights.
Many elements from the proposal are worrying, including:
- The definition of ‘terrorist’ content is extremely broad, opening the door for a huge amount of over-removal (including the potential for discriminatory effect) and the resulting risk that much lawful and public interest speech will be indiscriminately taken down;
- Government-appointed bodies, rather than independent courts, hold the ultimate authority to determine illegality, with few safeguards in place to ensure these authorities act in a rights-protective manner;
- The aggressive one-hour timetable for removing content upon notification is barely feasible for the largest platforms, let alone the many thousands of micro, small, and medium-sized online services that the proposal threatens;
- Companies could be forced to implement ‘proactive measures’ including upload filters, which, as we’ve argued before, are neither effective nor appropriate for the task at hand; and finally,
- The proposal risks making content removal an end in itself, simply pushing terrorist content off the open internet rather than tackling the underlying serious crimes.
As the European Commission acknowledges in its impact assessment, the severity of the measures it proposes could only ever be justified by the serious nature of terrorism and terrorist content. On its face, this is a plausible assertion. However, the evidence base underlying the proposal does not support the Commission’s approach. Indeed, as the Commission’s own impact assessment concedes, the volume of ‘terrorist’ content on the internet is on a downward trend, and only 6% of Europeans have reported seeing terrorist content online, realities that heighten the need for proportionality to be at the core of the proposal. Linked to this, the impact assessment estimates that around 10,000 European companies are likely to fall within this aggressive new regime, even though data from the EU’s police cooperation agency suggests terrorist content is confined to roughly 150 online services.
Moreover, the proposal conflates online speech with offline acts, despite the reality that the causal link between terrorist content online, radicalisation, and terrorist acts is far more nuanced. Within the academic research on terrorism and radicalisation, no clear and direct causal link between terrorist speech and terrorist acts has been established (see in particular, research from UNESCO and RAND). With respect to radicalisation specifically, the broader research suggests that exposure to radical political leaders and socio-economic factors are key components of the radicalisation process, and that online speech is not a determining factor. On this basis, the Commission fails to meet the high evidential bar required to justify such a serious interference with fundamental rights and the health of the internet ecosystem. In addition, this shaky evidence base demands that the proposal be subject to far greater scrutiny than it has been afforded thus far.
Beyond these concerns, it is troubling that this new legislation is likely to create a legal environment that entrenches the position of the largest commercial services, the only ones with the resources to comply, undermining the openness on which a healthy internet thrives. By setting a scope that covers virtually every service that hosts user content, and a compliance bar that only a handful of companies are capable of reaching, the new rules are likely to engender a ‘retreat from the edge’, as smaller, agile services prove unable to bear the cost of competing with the established players. In addition, the imposition of aggressive take-down timeframes and automated filtering obligations is likely to further diminish Europe’s standing as a bastion for free expression and due process.
Ultimately, the challenge of building sustainable and rights-protective frameworks for tackling terrorism is a formidable one, and one that is exacerbated when the internet ecosystem is implicated. With that in mind, we’ll continue to highlight how the nuanced interplay between hosting services, terrorist content, and terrorist acts means this proposal requires far more scrutiny, deliberation, and clarification. At the very least, any legislation in this space must include far greater rights protections, measures to ensure that suppression of online content doesn’t become an end in itself, and a compliance framework that doesn’t make the whole internet march to the beat of a handful of large companies.
Stay tuned.