Introducing the Columbia Convening on Openness and AI

We brought together experts to tackle a critical question: What does openness mean for AI, and how can it best enable trustworthy and beneficial AI?

Participants in the Columbia Convening on Openness and AI.

On February 29, Mozilla and the Columbia Institute of Global Politics brought together over 40 leading scholars and practitioners working on openness and AI. These individuals — spanning prominent open source AI startups and companies, non-profit AI labs, and civil society organizations — focused on exploring what “open” should mean in the AI era. Open source software helped make earlier eras of the internet safer and more robust — and offered trillions of dollars of value to startups and innovators as they created the digital services we all use today. Our shared hope is that open approaches can have a similar impact in the AI era.

To help unlock this significant potential, the Columbia Convening took an important step toward developing a framework for openness in AI and unifying the openness community around shared understandings and next steps. Participants noted that:

  • Openness in AI has the potential to advance key societal goals, including making AI safe and effective, unlocking innovation and competition in the AI market, and bringing underserved communities into the AI ecosystem.
  • Openness is a key characteristic to consider throughout the AI stack, and not just in AI models themselves. In components ranging from data to hardware to user interfaces, there are different types of openness that can be helpful for accomplishing different technical and societal goals. Participants reviewed research mapping dimensions of openness in AI, and noted the need to make it easier for developers of AI systems to understand where and how openness should be central to the technology they build.
  • Policy conversations need to be more thoughtful about the benefits and risks of openness in AI. For example, comparing the marginal risk that open systems pose in relation to closed systems is one promising approach to bringing rigor to this discussion. More work is needed across the board — from policy research on liability distribution, to more submissions to the National Telecommunications and Information Administration’s request for comment on “dual-use foundation models with widely available model weights.”
  • We need a stronger community and better organization to help build, invest in, and advocate for better approaches to openness in AI. This convening showed that the openness community can have collaborative, productive discussions even when there are meaningful differences of opinion among its members. Mozilla committed to continuing to help build and foster community on this topic.

Getting “open” right for AI will be hard — but it’s never been more timely or important. Today, while everyone gushes about how generative AI can change the world, only a handful of products dominate the market. This lack of competition is a real problem. It could mean that the new AI products we’ll begin to see in the next several years won’t be as innovative and safe as we need them to be — and will instead be built on the same closed, proprietary model that has defined roughly the last decade of online life. That’s why Mozilla’s recent report on Accelerating Progress Toward Trustworthy AI doubles down on openness, competition, and accountability as vital to the future of AI.

We know a better future is possible. During earlier eras of the Internet, open source technologies played a core role in promoting innovation and safety. Open source software made it easier to find and fix bugs. Attempts to limit open innovation — such as export controls on encryption in early web browsers — ended up being counterproductive, further exemplifying the value of openness. And, perhaps most importantly, open source technology has provided a core set of building blocks that software developers have used to do everything from creating art to designing vaccines to developing apps used by people all over the world; open source software is estimated to be worth over $8 trillion.

For years, we saw similar benefits play out for AI. Industry researchers openly published foundational AI research and frameworks, making it easier for academics and startups to keep pace with AI advances and enabling an ecosystem of external experts who could challenge the big AI players. But these benefits are not assured as we enter a new wave of AI innovation. As training AI systems requires ever more compute and data, some key players are shifting their attention away from publishing research and toward consolidating the competitive advantages and economies of scale needed to offer foundation models on demand. And as AI risks are portrayed as murkier and more hypothetical, it is becoming easier to argue that locking down AI models is the safest path forward. Today, it feels like the benefits and risks of AI depend on the whims of a few tech companies in Silicon Valley.

This can’t be the best approach to AI. If AI is truly so powerful and pervasive, shouldn’t AI be subject to real scrutiny from third-party assessments? If AI is truly so innovative and useful, shouldn’t there be more AI tools and systems that startups and small businesses can use?

We believe openness can and must play a key role in the future of AI — the question is how. Late last year, more than 1,800 people joined us in signing an open letter noting that, although the signatories hold different perspectives on open source AI, they all agree that open, responsible, and transparent approaches are critical to safety and security in the AI era. Indeed, across the AI ecosystem, some advocate for staged release of AI models, others believe other forms of openness in the AI stack matter more, and still others believe every part of an AI system should be as open as possible. Some people believe in openness for openness’ sake; others view openness as a means to other societal goals — such as identifying civil rights and privacy harms, promoting innovation and competition in the market, and supporting consumers and workers who want a say in how AI is deployed in their communities. We were thrilled to bring together people with such divergent views and motivations to collaborate on strengthening and leveraging openness in support of their missions.

We’re immensely grateful to the participants in the Columbia Convening on Openness and AI:

  • Anthony Annunziata — Head of AI Open Innovation and AI Alliance, IBM
  • Mitchell Baker — Chairwoman, Mozilla Foundation
  • Kevin Bankston — Senior Advisor on AI Governance, Center for Democracy and Technology
  • Adrien Basdevant — Tech Lawyer, Entropy Law
  • Ayah Bdeir — Senior Advisor, Mozilla
  • Philippe Beaudoin — Co-Founder and CEO, Waverly
  • Brian Behlendorf — Chief AI Strategist, The Linux Foundation
  • Stella Biderman — Executive Director, EleutherAI
  • John Borthwick — CEO, Betaworks
  • Zoë Brammer — Senior Associate for Cybersecurity & Emerging Technologies, Institute for Security and Technology
  • Glenn Brown — Principal, GOB Advisory
  • Kasia Chmielinski — Practitioner Fellow, Stanford Center on Philanthropy and Civil Society
  • Peter Cihon — Senior Policy Manager, GitHub
  • Julia Rhodes Davis — Chief Program Officer, Computer Says Maybe
  • Merouane Debbah — Senior Scientific AI Advisor, Technology Innovation Institute
  • Alix Dunn — Facilitator, Computer Says Maybe
  • Michelle Fang — Strategy, Cerebras Systems
  • Camille François — Faculty Affiliate, Institute for Global Politics at Columbia University’s School of Public and International Affairs
  • Stefan French — Product Manager, Mozilla.ai
  • Yacine Jernite — Machine Learning and Society Lead, Hugging Face
  • Amba Kak — Executive Director, AI Now Institute
  • Sayash Kapoor — Ph.D. Candidate, Princeton University
  • Helen King-Turvey — Managing Partner, Philanthropy Matters
  • Kevin Klyman — AI Policy Researcher, Stanford Institute for Human-Centered AI
  • Nathan Lambert — ML Scientist, Allen Institute for AI 
  • Yann LeCun — Vice President and Chief AI Scientist, Meta
  • Stefano Maffulli — Executive Director, Open Source Initiative
  • Nik Marda — Technical Lead, AI Governance, Mozilla
  • Ryan Merkley — CEO, Conscience
  • Mohamed Nanabhay — Managing Partner, Mozilla Ventures
  • Deval Pandya — Vice President of AI Engineering, Vector Institute
  • Deb Raji — Fellow at Mozilla and PhD Student, UC Berkeley
  • Govind Shivkumar — Director, Investments, Omidyar Network 
  • Aviya Skowron — Head of Policy and Ethics, EleutherAI
  • Irene Solaiman — Head of Global Policy, Hugging Face
  • Madhulika Srikumar — Lead for Safety Critical AI, Partnership on AI
  • Victor Storchan — Lead AI/ML Research, Mozilla.ai
  • Mark Surman — President, Mozilla Foundation
  • Nabiha Syed — CEO, The Markup
  • Martin Tisne — CEO, AI Collaborative, The Omidyar Group
  • Udbhav Tiwari — Head of Global Product Policy, Mozilla
  • Justine Tunney — Founder, Mozilla’s llamafile project
  • Imo Udom — SVP of Innovation, Mozilla
  • Sarah Myers West — Managing Director, AI Now Institute

In the coming weeks, we intend to publish more content related to the convening. We will release resources to help practitioners and policymakers grapple with the opportunities and risks of openness in AI, such as determining how openness can help make AI systems safer and better. We will also continue to bring similar communities together to keep pushing this important work forward.

