A Third Way on AI

Last week was an important moment in the debate about AI, with President Biden issuing an executive order and the UK’s AI Safety Summit convening world leaders.

Much of the buzz around these events made it sound like AI presents us with a binary choice: unbridled optimism or existential fear. But there was also a third path available: a nuanced, practical perspective that examines the real risks and benefits of AI.

There have been people promoting this third perspective for years, although the GPT-fueled headlines of the past 12 months have often looked past them. They are foundations, think tanks, researchers, and activists (including a number of Mozilla fellows and founders), plus the policymakers behind efforts like last year's Blueprint for an AI Bill of Rights.

We were happy to see the executive order echo many of the ideas that have emerged from this school of thought over the last few years, prioritizing practical, responsible AI governance. The UK Safety Summit started on a very different note, anchored in concerns around existential risks, though it also offered some welcome reframing.

As we look forward from this point, it feels important to highlight three key levers that will help us get closer to responsible AI governance: well-designed regulation, open markets, and open source. Some of these were in the news last week, while others require more attention. Together, they have the potential to help us shape AI in ways that are more trustworthy, empowering and equitable. 

Regulation

As we saw last week, there is near consensus that AI presents risks and harms, from the immediate (discrimination, disinformation) to the longer term (risks that are still emerging and being explored). There's also a growing consensus that regulation is part of the solution.

But what exactly this regulation should look like, and what its outcomes should be, is where consensus breaks down. One thing is clear, though: any regulatory framework should protect people from harm and provide mechanisms to hold companies accountable when they cause it.

The executive order included encouraging elements, balancing a rights-respecting approach to AI's present risks with exploration of longer-term, more speculative ones. It also acknowledged that the U.S. is still missing critical baseline protections, such as comprehensive privacy legislation that works hand in hand with AI-specific rules.

The ideas that dominated the Safety Summit were less encouraging. They reinforced that old binary, either going too far or not far enough. There was a focus on self-regulation by AI companies (which isn't really governance at all). And there were nods toward the idea of licensing large language models (which would only "increase concentration and may worsen AI risks," in the words of Sayash Kapoor and Arvind Narayanan).

Open markets 

To Arvind and Sayash’s point, there is a problematic concentration of power in the tech industry. Decisions about AI, like who it most benefits or who is even allowed to access it, are made by a handful of people in just a few corners of the world. The majority of people impacted by this technology don’t get to shape it in any meaningful way. 

Competition is an antidote. AI development not just by big companies but also by smaller ones (and nonprofits, too) has the potential to decentralize power. And government action to curb monopolies and anti-competitive practices can accelerate this. The executive order takes note, calling on the Federal Trade Commission (FTC) to promote competition and protect small businesses and entrepreneurs.

It's important for this work to start now, both by enforcing existing competition law and by adopting ex-ante interventions like the UK's DMCC bill. The previous decade showed how quickly incumbent players like social media platforms can acquire or shut down competitors. And it's already happening again: Anthropic and OpenAI have familiar investors (Google and Amazon for the former, Microsoft for the latter), and once-independent laboratories like DeepMind were acquired long ago (by Google).

Open source

For smaller AI players to thrive in the marketplace, the core building blocks of the technology need to be broadly accessible. This has been a key lever in the past: open-source technologies like Linux and Firefox allowed a diverse set of companies to compete and thrive in the early days of the web.

Open source can play a similar role in fueling competition in AI and, more specifically, large language models. This is something organizations like Ai2, EleutherAI, Mistral, and Mozilla.ai are focused on. Open source AI also has the potential to strengthen oversight, allowing governments and public interest groups to scrutinize the technology and call out bias, security flaws, and other issues. We've already seen open source catch critical bugs in tooling used for core AI development. While open source isn't a panacea (and it can be twisted to further consolidate power if it's not done right), it has huge potential to help more people participate in and shape the next era of AI.

A major threat to open source AI is emerging, however: some are using the fear of existential risk to propose approaches that would shut it down. Yes, bad actors could abuse open source AI models, but internet history shows that proprietary technologies are just as likely to be abused. Rushing to shut down open source AI in response to speculative fears, rather than exploring new approaches focused on responsible release, could unnecessarily foreclose our ability to tap into the potential of these technologies.

Collaboratively dealing with global problems is not a new idea in technology. In fact, there are many lessons to learn from previous efforts: how we dealt with cybersecurity issues like encryption, governed the internet across borders, and worked to counter content moderation challenges like disinformation. What we need to do is take the time to develop a nuanced approach to open source and AI. We are happy to see the EU's upcoming AI Act exploring these questions, and the recent U.S. executive order instructing the Department of Commerce to collect input on both the risks and benefits of "dual-use foundation models with widely accessible weights" (in essence, open-source foundation models). This creates a process to develop the kind of nuanced, well-informed approaches we need.

That was exactly the goal of the letter on open source and AI safety that we both signed last week, along with over 1,500 others. It was a public acknowledgement that open source and open science are neither a silver bullet nor a danger. They are tools that can be used to better understand risks, bolster accountability, and fuel competition. The letter also made clear that positioning tight, proprietary control of foundational AI models as the only path to safety is naive, and maybe even dangerous.

The letter was just that — a letter. But we hope it’s part of something bigger. Many of us have been calling for AI governance that balances real risks and benefits for years. The signers of the letter include a good collection of these voices — and many new ones, often coming from surprising places. The community of people ready to roll up their sleeves to tackle the thorny problems of AI governance (even alongside people they usually disagree with) is growing. This is exactly what we need at this juncture. There is much work ahead.

