Big Tech can’t outrun demands for accountability

This article first appeared in the Financial Times

For too long, accountability in the tech sector has been allowed to fall short. Information is power, after all, and harms can fester when hidden. It is therefore all the more concerning that Facebook has once again sought to shut down public interest research on its platform. 

Its decision last week to terminate the personal accounts of the New York University researchers who built Ad Observer, an extension that sheds light on political advertising on the platform, was a stark reminder of the fragility of transparency. Facebook says it acted out of concern for user data, a curious claim given that Mozilla’s privacy and security reviews of Ad Observer’s source code found it to be robust and trustworthy.

This is not the first time that Facebook has acted to curtail research into its platform. It sidelined CrowdTangle, its own content-tracking tool, and blocked transparency tools from ProPublica and Mozilla in 2019. Faced with corporate stonewalling and an increasingly adversarial environment, independent researchers and public interest organisations have worked hard to fill the informational void and uncover the consequences of otherwise hidden harms. But so long as transparency is something platforms grant as a favour, or something that depends on public interest bootstrapping and ingenuity, we will never have true accountability.

The need for meaningful transparency into Big Tech cannot be overstated. Without it, we cannot map harmful online experiences and show how the design and operational choices of platforms may contribute to them. Efforts to regulate for accountability have often felt like grasping in the dark. Fortunately, a corner is about to be turned: transparency may soon be not a voluntary initiative but a legal obligation.

The EU’s draft Digital Services Act — currently the subject of fierce Big Tech lobbying in Brussels — signals the beginning of the end of this era of secrecy. The DSA’s innovation, unglamorous as it may sound, is more transparency into content-related harms online and their consequences. This is essential for effective policy responses to the hate speech and disinformation that have become an all-too-common part of our browsing experience. More specifically, the DSA is designed precisely to safeguard and enable the public interest research undertaken by projects like Ad Observer.

First, platforms with advertising networks will be required to publicly disclose all ads and the groups they intend to target. Purveyors of disinformation and other forms of harmful content have long exploited this ecosystem to amplify political campaigns and microtarget vulnerable users. When platforms are open and upfront about this information, we can better identify and respond to threats to individuals, communities and our democracies.

Platforms will also have to allow public interest and privacy-minded researchers to look under the hood, permitting closer examination of how content is promoted and moderated, and of whether platforms protect vulnerable communities from abuse and harassment. The DSA outlines a robust legal framework to ensure researchers can access the hitherto hidden data they need to assess where companies are falling short.

Finally, the DSA beefs up oversight of the tech sector to ensure that rules have weight and platforms meet the promises they make. To that end, platforms will be required to undergo independent third-party audits, as in the financial services sector, to ensure that they treat researchers fairly and are open about their systems and practices.

The draft law is a starting point. Transparency is rarely an end in itself, but it is a crucial prerequisite to accountability. EU lawmakers must act to protect and enable public interest research. Our communities and democracy depend on it.

