Navigating the Future of Openness and AI Governance: Insights from the Paris Openness Workshop

In December 2024, in the lead-up to the AI Action Summit, Mozilla, Fondation Abeona, École Normale Supérieure (ENS), and Columbia University’s Institute of Global Politics gathered at ENS in Paris, bringing together a diverse group of AI experts, academics, civil society representatives, regulators and business leaders to discuss a topic increasingly central to the future of AI: what openness means and how it can enable trustworthy, innovative, and equitable outcomes.

The workshop followed the Columbia Convenings on Openness and AI, which Mozilla held in partnership with Columbia University’s Institute of Global Politics. Those gatherings, held over the course of 2024 in New York and San Francisco, brought together more than 40 experts to address what “openness” should mean in the AI era.

Over the past two years, Mozilla has mounted a significant effort to promote and defend the role of openness in AI. Mozilla launched Mozilla.ai, an initiative focused on ethical, open-source AI tools, and supported small-scale, localized AI projects through its Builders accelerator program. Beyond technical investments, Mozilla has also been a vocal advocate for openness in AI policy, urging governments to adopt regulatory frameworks that foster competition and accountability while addressing risks. Through these initiatives, Mozilla is shaping a future where AI development aligns with public interest values.

This Paris Openness Workshop, part of the official ‘Road to the Paris AI Summit’ ahead of the Summit in February 2025, aimed to bring together the European AI community and form actionable recommendations for policymakers. While the day embraced healthy debate and disagreement on issues such as how to define openness in AI, there was broad agreement on the urgency of crafting collective ideas to advance openness while navigating an increasingly complex commercial, political and regulatory landscape.

The stakes could not be higher. As AI continues to shape our societies, economies, and governance systems, openness emerges as both an opportunity and a challenge. On one hand, open approaches can expand access to AI tools, foster innovation, and enhance transparency and accountability. On the other hand, they raise complex questions about safety and misuse. In Europe, these questions intersect with transformative regulatory frameworks like the EU AI Act, which seeks to ensure that AI systems are both safe and aligned with fundamental rights.

As in software development, being ‘open’ is a crucial goal for AI. At its heart, as the discussion reminded us, openness is a holistic outlook. For AI in particular, it is a pathway to more pluralistic tools: systems that can be more transparent, contextual, participatory and culturally appropriate. Each of these goals, however, contains natural tensions.

A central question of this most recent dialogue challenged participants to find the best ways to build with safety in mind while also embracing openness. The day was divided into two workshops that examined these questions from technical and policy standpoints.

Running through both workshops was a persistent challenge: the multifaceted nature of the term “openness”. In the policy context, the term “open-source” can be too narrow, and at times it risks being seen as an ideological stance rather than a pragmatic tool for addressing specific issues. To address this, many participants felt openness should be framed as a set of components, including open models, data, and tools, each of which carries specific benefits and risks.

Examining Technical Perspectives on Openness and Safety

A significant concern for many in the open-source community is getting access to the best existing safety tools. Despite the increasing importance of AI safety, many researchers find it difficult or expensive to access tools that help identify and address AI risks. In particular, the discussion surfaced a growing tension: some researchers and startups have found it difficult to access datasets of known CSAM (Child Sexual Abuse Material) hashes, even though access to these hash sets could help them mitigate misuse and clean training datasets. The workshop called for broader sharing of safety tools and more support for those working at the cutting edge of AI development.
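To make the mechanism concrete, here is a minimal sketch of how such a hash set would be used to clean a training corpus. It is illustrative only: production systems rely on perceptual hashes (such as PhotoDNA) that match visually similar images rather than exact bytes, and those hash sets are access-controlled, which is exactly the barrier participants described. The file names and directory layout below are hypothetical.

```python
import hashlib
from pathlib import Path

def load_blocklist(path: str) -> set[str]:
    """Read one hex-encoded hash per line from a blocklist file."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def clean_dataset(image_dir: str, blocklist: set[str]) -> list[Path]:
    """Return only the files whose content hash is NOT on the blocklist."""
    kept = []
    for p in sorted(Path(image_dir).iterdir()):
        if not p.is_file():
            continue
        # Exact-match cryptographic hash; real pipelines would use a
        # perceptual hash to also catch re-encoded or resized copies.
        digest = hashlib.sha256(p.read_bytes()).hexdigest()
        if digest not in blocklist:
            kept.append(p)
    return kept

if __name__ == "__main__":
    blocklist = load_blocklist("known_bad_hashes.txt")  # hypothetical hash set
    kept = clean_dataset("training_images/", blocklist)
    print(f"{len(kept)} files retained after filtering")
```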

More broadly, some participants were frustrated by the perception that open-source AI development is unconcerned with questions of safety. They pointed out that, especially when it comes to regulation, a strong focus on safety makes open-source projects more competitive, not less.

Discussing Policy Implications of Openness in AI

Policy discussions during the workshop focused on the economic, societal, and regulatory dimensions of openness in AI. These ranged over several themes, including:

  1. Challenging perceptions of openness: There is a clear need to change the narrative around openness, especially in policymaking circles. The open-source community must both act as a community and present itself as knowledgeable and solution-oriented, demonstrating how openness can be a means of advancing the public interest rather than an abstract ideal. As one participant pointed out, openness should be viewed as a tool for societal benefit, not as an end in itself.
  2. Tensions between regulation and innovation are misleading: The EU’s AI Act, as one of the first AI regulatory frameworks to be drafted, is viewed by many as a test bed for smarter AI regulation. Regulation is widely characterised as obstructing innovation, but some participants highlighted that this framing can be misleading: many new entrants actively seek out jurisdictions with favourable regulatory and competition policies that level the playing field.
  3. A changing U.S. perspective: In the United States, the open-source AI agenda has gained significant traction, particularly after incidents like the leak of Meta’s Llama model weights showed that many of the feared risks associated with openness did not materialize. Significantly, the U.S. National Telecommunications and Information Administration has emphasized the benefits of open-source AI technology and introduced a nuanced view of the safety concerns around open-weight AI models.

Many participants also agreed that policymakers, many of whom are not deeply immersed in the technicalities of AI, need a clearer framework for understanding the value of openness. Given the agenda of the upcoming Paris AI Summit, some participants felt one answer could lie in the concept of public interest AI, which resonates more directly with broader societal goals while still acknowledging the risks and challenges that openness brings.

Recommendations 

Embracing openness in AI is non-negotiable if we are to build trust and safety; it fosters transparency, accountability, and inclusive collaboration. Openness must extend beyond software to broader access to the full AI stack, including data and infrastructure, with governance that safeguards the public interest and prevents monopolization.

It is clear that the open-source community must make its voice heard more loudly. If AI is to advance competition, innovation, language, research, culture and creativity for the global majority, then an evidence-based case for openness, particularly its proven economic benefits, is essential for driving this agenda forward.

Several recommendations for policymakers also emerged.

  1. Diversify AI Development: Policymakers should seek to diversify the AI ecosystem so that it is not dominated by a few large corporations, fostering more equitable access to AI technologies and reducing monopolistic control. This should be approached holistically, looking at everything from procurement to compute strategies.
  2. Support Infrastructure and Data Accessibility: There is an urgent need to invest in AI infrastructure, including access to data and compute power, in a way that does not exacerbate existing inequalities. Policymakers should prioritize the distribution of resources so that smaller actors, especially those outside major tech hubs, are not locked out of AI development.
  3. Understand openness as central to achieving AI that serves the public interest: One of the official tracks of the upcoming Paris AI Action Summit is Public Interest AI. Openness should increasingly be treated as a primary route to AI that genuinely serves the public interest.
  4. Openness should be an explicit EU policy goal: With one of the most advanced AI regulatory frameworks, the EU will continue to be a test bed for many of the big questions in AI policy. The EU should adopt the promotion of openness in AI as an explicit policy goal.

We will raise all of these issues at the AI Action Summit in Paris. The organizers hope to host another set of discussions after the Summit concludes, in order to continue working with the community and to better inform governments and other stakeholders around the world.

The list of participants at the Paris Openness Workshop is below:

  • Linda Griffin – VP of Global Policy, Mozilla
  • Udbhav Tiwari – Director, Global Product Policy, Mozilla
  • Camille François – Researcher, Columbia University
  • Tanya Perelmuter – Co-founder and Director of Strategy, Fondation Abeona
  • Yann Lechelle – CEO, Probabl
  • Yann Guthmann – Head of the Digital Economy Department, French Competition Authority
  • Adrien Basdevant – Tech lawyer, Entropy Law
  • Andrzej Neugebauer – AI Program Director, LINAGORA
  • Thierry Poibeau – Director of Research, CNRS, ENS
  • Nik Marda – Technical Lead for AI Governance, Mozilla
  • Andrew Strait – Associate Director, Ada Lovelace Institute (UK)
  • Paul Keller – Director of Policy, Open Future (Netherlands)
  • Guillermo Hernandez – AI Policy Analyst, OECD
  • Sandrine Elmi Hersi – Unit Chief of “Open Internet”, ARCEP