Last week, the Indian government proposed sweeping changes to the legal protections for “intermediaries”, which affect every internet company today. Intermediary liability protections have been fundamental to the growth of the internet as an open and secure medium of communication and commerce. Whether it is Section 79 of the Information Technology Act in India (under which these new rules are proposed), the EU’s E-Commerce Directive, or Section 230 of the US Communications Decency Act, these legal provisions ensure that companies generally have no obligation to actively censor and face only limited liability for their users’ illegal activities and postings until they have knowledge of them. In India, the landmark Shreya Singhal judgment clarified in 2015 that companies would only be expected to remove content when directed to do so by a court order.
The new rules proposed by the Ministry of Electronics and Information Technology (MEITY) turn this logic on its head. They propose that all “intermediaries”, ranging from social media and e-commerce platforms to internet service providers, be required to proactively remove “unlawful” user content, or else face liability for content on their platforms. They also propose a sharp blow to end-to-end encryption technologies, which secure the most popular messaging, banking, and e-commerce apps today, by requiring services to make available information about the creators or senders of content to government agencies for surveillance purposes.
The government has justified this move based on “instances of misuse of social media by criminals and anti-national elements”, citing lynching incidents spurred on by misinformation campaigns. We recognize that harmful content online – from hate speech and misinformation to terrorist content – undermines the overall health of the internet and stifles its empowering potential. However, the regulation of speech online necessarily calls into play numerous fundamental rights and freedoms guaranteed by the Indian constitution (freedom of speech, right to privacy, due process, etc), as well as crucial technical considerations (‘does the architecture of the internet render this type of measure possible or not’, etc). This is a delicate and critical balance, and not one that should be approached with such maladroit policy proposals.
Our five main concerns are summarised here, and we will build on these for our filing to MEITY:
- The proactive obligation on services to remove “unlawful” content will inevitably lead to over-censorship and chill free expression.
- Automated and machine-learning solutions should not be encouraged as a silver bullet to fight against harmful content on the internet.
- One-size-fits-all obligations for all types of online services and all types of unlawful content are arbitrary and disproportionately harm smaller players.
- Requiring services to decrypt encrypted data weakens overall security and contradicts the principles of data minimisation endorsed in MEITY’s draft data protection bill.
- Disproportionate operational obligations, such as mandatory incorporation in India, are likely to spur market exit and deter market entry for SMEs.
We do need to find ways to hold social media platforms to higher standards of responsibility, and we acknowledge that building rights-protective frameworks for tackling illegal content on the internet is a challenging task. However, whittling down intermediary liability protections and undermining end-to-end encryption are blunt and disproportionate tools that fail to strike the right balance. We stress that any regulatory intervention on this complex issue must be preceded by a wide-ranging and participatory dialogue. We look forward to continued constructive engagement with MEITY and other stakeholders on this issue.