One hour takedown deadlines: The wrong answer to Europe’s content regulation question

We’ve written a lot recently about the dangers that the EU Terrorist Content regulation poses to internet health and user rights, and efforts to combat violent extremism. One aspect that’s particularly concerning is the rule that all online hosts must remove ‘terrorist content’ within 60 minutes of notification. Here we unpack why that obligation is so problematic, and put forward a more nuanced approach to content takedowns for EU lawmakers.

Since the early days of the web, ‘notice & action’ has been the cornerstone of online content moderation. As there is so much user-generated content online, and because it is incredibly challenging for an internet intermediary to have oversight of each and every user activity, the best way to tackle illegal or harmful content is for online intermediaries to take ‘action’ (e.g. remove it) once they have been ‘notified’ of its existence by a user or another third party. Despite the fast-changing nature of internet technology and policy, this principle has shown remarkable resilience. While it often works imperfectly and there is much that could be done to make the process more effective, it remains a key tool for online content control.

Unfortunately, the EU’s Terrorist Content regulation stretches this tool beyond its limit. Under the proposed rules, all hosting services, regardless of their size, nature, or exposure to ‘terrorist content’, would be obliged to put in place technical and operational infrastructure to remove content within 60 minutes of notification. There are three key reasons why this is a major policy error:

  • Regressive burden: Not all internet companies are the same, and it is reasonable to suggest that in terms of online content control, those who have more should do more. More concretely, it is intuitive that a social media service with billions in revenue and users should be able to remove notified content more quickly than a small family-run online service with a far narrower reach. Unfortunately, however, this proposal forces all online services – regardless of their means – to implement the same ambitious 60-minute takedown timeframe. This places a disproportionate burden on those least able to comply, giving an additional competitive advantage to the handful of already dominant online platforms.
  • Incentivises over-removal: A crucial aspect of the notice & action regime is the post-notification review and assessment. Regardless of whether a notification of suspected illegal content comes from a user, a law enforcement authority, or a government agency, it is essential that online services review the notification to assess its validity and conformity with basic evidentiary standards. This ‘quality assurance’ aspect is essential given how often notifications are either inaccurate, incomplete, or in some instances, bogus. However, a hard deadline of 60 minutes to remove notified content makes it almost impossible for most online services to do the kind of content moderation due diligence that would minimise this risk. What’s likely to result is the over-removal of lawful content. Worryingly, the risk is especially high for ‘terrorist content’ given its context-dependent nature and the thin line between intentionally terroristic and good-faith public interest reporting.
  • Little proof that it actually works: Most troubling about the European Commission’s 60-minute takedown proposal is that there doesn’t seem to be any compelling reason why 60 minutes is an appropriate or necessary timeframe. To date, the Commission has produced no research or evidence to justify this approach; a surprising state of affairs given how radically this obligation departs from existing policy norms. At the same time, a ‘hard’ 60-minute deadline strips the content moderation process of strategy and nuance, allowing for no distinction between types of terrorist content, their likely reach, or the likelihood that they will incite terrorist offences. With no distinction there can be no prioritisation.

For context, the decision by the German government to mandate a takedown deadline of 24 hours for ‘obviously illegal’ hate speech in its 2017 ‘NetzDG’ law sparked considerable controversy on the basis of the risks outlined above. The Commission’s proposal brings a whole new level of risk. Ultimately, the 60-minute takedown deadline in the Terrorist Content regulation is likely to undermine the ability of new and smaller internet services to compete in the marketplace, and creates an enabling environment for interference with user rights. Worse, there is nothing to suggest that it will help reduce the terrorist threat or the problem of radicalisation in Europe.

From our perspective, the deadline should be replaced by a principle-based approach, which ensures the notice & action process is scaled according to different companies’ exposure to terrorist content and their resources. For that reason, we welcome amendments that have been suggested in some European Parliament committees that call for terrorist content to be removed ‘expeditiously’ or ‘without undue delay’ upon notification. This approach would still ensure that online intermediaries make the removal of terrorist content from their services a key operational objective, but in a way that reflects their exposure, their technical architecture, their resources, and the risk such content is likely to pose.

As we’ve argued consistently, one of the EU Terrorist Content regulation’s biggest flaws is its lack of any proportionality criterion. Replacing the hard 60-minute takedown deadline with a principle-based approach would go a long way towards addressing that. While this won’t fix everything – there are still major concerns with regard to upload filtering, the unconstrained role of government agencies, and the definition of terrorist content – it would be an important step in the right direction.