Categories: Europe, Safety, User Rights

Mozilla Foundation fellow weighs in on flawed EU Terrorist Content regulation

As we’ve noted previously, the EU’s proposed Terrorist Content regulation would seriously undermine internet health in Europe by forcing companies to aggressively suppress user speech with limited due process and user rights safeguards. Equally concerning, however, is that the proposal is likely to achieve little in terms of reducing the actual terrorism threat or the phenomenon of radicalisation in Europe. Here, Mozilla Foundation Tech Policy fellow and community security expert Stefania Koskova* unpacks why, and proposes an alternative approach for EU lawmakers.

With the proposed Terrorist Content regulation, the EU has the opportunity to set a global standard for addressing a pressing public policy concern. To succeed, policies on harmful and illegal content must carefully and meaningfully balance the objectives of national security, internet-enabled economic growth, and human rights. Content policies addressing national security threats should reflect how internet content relates to ‘offline’ harm, and should provide sufficient guidance on how to reduce that harm comprehensively and responsibly, in parallel with other interventions. Unfortunately, the Commission’s proposal falls well short in this regard.

Key shortcomings:

  • Flawed definitions: In its current form, the definition of ‘terrorist content’ lacks clarity and specificity, creating unnecessary confusion between ‘terrorist content’ and terrorist offences. Biased application, such as associating terrorism with certain national or religious minorities or with certain ideologies, can lead to serious harm and real-world consequences. This in turn can contribute to further polarisation and radicalisation.
  • Insufficient content assessment: The proposal does not standardise the procedure for assessing ‘terrorist content’ from a risk perspective, nor the evidentiary requirements that inform content removal decisions by government authorities or online services. Member States and hosting service providers are asked to evaluate the terrorist risk associated with specific online content without clear or precise assessment criteria.
  • Weak harm reduction model: Without a clear understanding of how ‘terrorist content’ affects the radicalisation process in specific contexts and circumstances, it is inadvisable and contrary to the goal of evidence-based policymaking to assume that removal, blocking, or filtering will reduce radicalisation and prevent terrorism. Further, the potential adverse effects of removal, blocking, and filtering, such as fuelling the grievances of those susceptible to terrorist propaganda, are not considered.

As such, the European Commission’s draft proposal in its current form creates additional risks while offering only vaguely defined benefits for countering radicalisation and preventing terrorism. To avoid the worst outcomes, the following amendments should be made as a matter of urgency:

  • Improving the definition of ‘terrorist content’: The definition of ‘terrorist content’ should be clarified so that it depends on both illegality and intentionality. This is essential to protect the public interest speech of journalists, human rights defenders, and other witnesses and archivists of terrorist atrocities.
  • Disclosing ‘what counts’ as terrorism through transparency reporting and monitoring: The proposal should oblige Member States and hosting service providers to report on how much illegal terrorist content is removed, blocked, or filtered under the regulation – broken down by category of terrorism (e.g. nationalist-separatist, right-wing, left-wing) – and on the extent to which content decisions and actions were linked to law enforcement investigations. With perceptions of the terrorist threat in the EU diverging across countries and across the political spectrum, such reporting can safeguard against intentional or unintentional bias in implementation.
  • Assessing security risks: In addition to being grounded in a legal assessment, content control actions taken by competent authorities and companies should be strategic – i.e. based on an assessment of the content’s danger to public safety and the likelihood that it will contribute to the commission of terrorist acts. This risk assessment should also take into account the likely negative repercussions of content removal, blocking, or filtering.
  • Focusing on impact: The proposal should ensure that all content policy measures are closely coordinated with the deployment of strategic counter-narratives against radicalisation, and with broader terrorism prevention and rehabilitation programmes.

The above recommendations address the proposal’s shortcomings in the terrorism prevention context. Beyond these, however, there remain the contested issues of 60-minute content takedowns and mandated proactive filtering, both of which are serious threats to internet health. There is an opportunity, through the parliamentary procedure, to address these concerns. Constructive feedback, including specific proposals that could significantly improve the current text, has been put forward by EU Parliament Committees, civil society, and industry representatives.

The stakes are high. With this proposal, the EU can create a benchmark for how democratic societies should address harmful and illegal online content without compromising their own values. It is imperative that lawmakers take the opportunity.

*Stefania Koskova is a Mozilla Foundation Tech Policy fellow and a counter-radicalisation practitioner. Learn more about her Mozilla Foundation fellowship here.