
India’s new intermediary liability and digital media regulations will harm the open internet

Last week, in a sudden move that will have disastrous consequences for the open internet, the Indian government notified a new regime for intermediary liability and digital media regulation. Intermediary liability (or “safe harbor”) protections have been fundamental to growth and innovation on the internet as an open and secure medium of communication and commerce. By expanding the “due diligence” obligations that intermediaries must follow to retain safe harbor, these rules will harm end-to-end encryption, substantially increase surveillance, promote automated filtering, and prompt a fragmentation of the internet that would harm users without empowering Indians. While many of the most onerous provisions apply only to “significant social media intermediaries” (a new classification scheme), the ripple effects of these provisions will have a devastating impact on freedom of expression, privacy, and security.

As we explain below, the current rules are not fit for purpose and will have a series of unintended consequences for the health of the internet as a whole:

  • Traceability of Encrypted Content: Under the new rules, law enforcement agencies can demand that companies trace the ‘first originator’ of any message. Many popular services today deploy end-to-end encryption and do not store source information, precisely to enhance the security of their systems and the privacy they guarantee users. When the first originator is located outside India, the significant social media intermediary must identify the first originator within the country, making an already impossible task even harder. This is essentially a mandate requiring encrypted services to store additional sensitive information and/or break end-to-end encryption, which would weaken overall security, harm privacy, and contradict the principles of data minimization endorsed in the Ministry of Electronics and Information Technology’s (MeitY) draft data protection bill.
  • Harsh Content Takedown and Data Sharing Timelines: Short timelines of 36 hours for content takedowns and 72 hours for sharing user data, applicable to all intermediaries, pose significant implementation and freedom of expression challenges. Intermediaries, especially small and medium service providers, would not have sufficient time to analyze requests or seek further clarification or other remedies under the current rules. This would likely create a perverse incentive to take down content and share user data without sufficient due process safeguards, with the fundamental rights to privacy and freedom of expression (as we’ve said before) suffering as a result.
  • User-Directed Takedowns of Non-Consensual Sexually Explicit Content and Morphed/Impersonated Content: All intermediaries must remove or disable access to information within 24 hours of being notified by users or their representatives (not necessarily government agencies or courts) when it comes to non-consensual sexually explicit content (revenge pornography, etc.) and impersonation in an electronic form (deep fakes, etc.). While this attempts to address a legitimate and concerning problem, the solution is overbroad and goes against the Indian Supreme Court’s landmark Shreya Singhal judgment, which clarified in 2015 that companies would only be expected to remove content when directed to do so by a court order or a government agency.
  • Social Media User Verification: In a move that could be dangerous for the privacy and anonymity of internet users, the rules contain a provision requiring significant social media intermediaries to offer users the option to voluntarily verify their identities. This would likely entail users sharing phone numbers or sending photos of government-issued IDs to the companies. The provision will incentivize the collection of sensitive personal data submitted for verification, which can then also be used to profile and target users (though the rules do seem to require explicit consent for such use). This is not hypothetical conjecture: we have already seen phone numbers collected for security purposes being used for profiling. The provision will also increase the risk posed by data breaches and entrench power in the hands of large players in the social media and messaging space who can afford to build and maintain such verification systems. There is no evidence that this measure will help fight misinformation (its motivating factor), and it ignores the benefits that anonymity can bring to the internet, such as whistleblowing and protection from stalkers.
  • Automated Filtering: While improved from its earlier iteration in the 2018 draft, the provisions to “endeavor” to carry out automated filtering of child sexual abuse material (CSAM), non-consensual sexual acts, and previously removed content apply to all significant social media intermediaries (including end-to-end encrypted messaging applications). These requirements are likely fundamentally incompatible with end-to-end encryption: they would require companies to embed monitoring infrastructure that continuously surveils the activities of users, weakening protections that millions of users have come to rely on in their daily lives, with disastrous implications for freedom of expression and privacy.
  • Digital Media Regulation: In a surprising expansion of scope, the new rules also contain government registration and content takedown provisions for online news websites, online news aggregators, and curated audio-visual platforms. After some self-regulatory stages, the framework essentially gives government agencies the ability to order the takedown of online news and current affairs content by publishers (which are not intermediaries), with very few meaningful checks and balances against overreach.

The final rules do contain some improvements over the original 2011 rules and the 2018 draft, such as limiting the scope of some provisions to significant social media intermediaries, user and public transparency requirements, due process checks and balances around traceability requests, a narrower automated filtering provision, and explicit recognition of the “good samaritan” principle for voluntary enforcement of platform guidelines. In their overall scope, however, they set a dangerous precedent for internet regulation and need urgent reform.

Ultimately, illegal and harmful content on the web, the lack of sufficient accountability for it, and substandard responses to it undermine the overall health of the internet and, as such, are a core concern for Mozilla. We have been at the forefront of these conversations globally (in the UK, the EU, and on the 2018 version of this draft in India), pushing for approaches that manage the harms of illegal content online within a rights-protective framework. The regulation of speech online necessarily calls into play numerous fundamental rights and freedoms guaranteed by the Indian constitution (freedom of speech, the right to privacy, due process, etc.), as well as crucial technical considerations (“does the architecture of the internet render this type of measure possible or not”, etc.). This is a delicate and critical balance, and not one that should be approached with blunt policy proposals.

These rules are already binding law, with the provisions for significant social media intermediaries coming into force three months from now (approximately late May 2021). Given the many new provisions in these rules, we recommend that they be withdrawn, and that any new rules be preceded by wide-ranging and participatory consultations with all relevant stakeholders before notification.