
Mozilla’s research: Unlocking AI for everyone, not just Big Tech

Artificial intelligence (AI) is shaping how we live, work and connect with the world. From chatbots to image generators, AI is transforming our online experiences. But this change raises serious questions: Who controls the technology behind these AI systems? And how can we ensure that everyone — not just a handful of Big Tech companies — has a fair shot at accessing and contributing to this powerful technology?

To explore these crucial issues, Mozilla commissioned two pieces of research that dive deep into the challenges around AI access and competition: “External Researcher Access to Closed Foundation Models” (commissioned from data rights agency AWO) and “Stopping Big Tech From Becoming Big AI” (commissioned from the Open Markets Institute). These reports show how AI is being built, who’s in control and what changes need to happen to ensure a fair and open AI ecosystem.

Why researcher access matters

“External Researcher Access to Closed Foundation Models” (authored by Esme Harrington and Dr. Mathias Vermeulen from AWO) addresses a pressing issue: independent researchers need better conditions for accessing and studying the AI models that big companies have developed. Foundation models — the core technology behind many AI applications — are controlled mainly by a few major players who decide who can study or use them.

What’s the problem with access?

  • Limited access: Companies like OpenAI, Google and others are the gatekeepers. They often restrict access to researchers whose work aligns with their priorities, which means independent, public-interest research can be left out in the cold.
  • High costs: Even when access is granted, it often comes with a hefty price tag that smaller or less-funded teams can’t afford.
  • Lack of transparency: These companies don’t always share how their models are updated or moderated, making it nearly impossible for researchers to replicate studies or fully understand the technology.
  • Legal risks: When researchers try to scrutinize these models, they sometimes face legal threats if their work uncovers flaws or vulnerabilities in the AI systems.

The research suggests that companies need to offer more affordable and transparent access to improve AI research. Additionally, governments should provide legal protections for researchers, especially when they are acting in the public interest by investigating potential risks.


AI competition: Is big tech stifling innovation?

The second piece of research (authored by Max von Thun and Daniel Hanley from the Open Markets Institute) takes a closer look at the competitive landscape of AI. Right now, a few tech giants like Microsoft, Google, Amazon, Meta and Apple are building expansive ecosystems which set them up to dominate various parts of the AI value chain. And a handful of companies control most of the key resources needed to develop advanced AI, including computing power, data and cloud infrastructure. The result? Smaller companies and independent innovators are being squeezed out of the race from the start.

What’s happening in the AI market?

  • Market concentration: A small number of companies have a stranglehold on key inputs and distribution in AI. They control the data, computing power and infrastructure everyone else needs to develop AI.
  • Anticompetitive tie-ups: These big players acquire, or strike deals with, smaller AI startups in ways that often evade traditional competition controls. This can stop these smaller companies from challenging Big Tech and prevents others from competing on a level playing field.
  • Gatekeeper power: Big Tech’s control over essential infrastructure — like cloud services and app stores — allows them to set unfair terms for smaller competitors. They can charge higher fees or prioritize their products over others.

The research calls for strong action from governments and regulators to avoid recreating the same market concentration we have seen in digital markets over the past 20 years. It’s about creating a level playing field where smaller companies can compete, innovate and offer consumers more choices. This means enforcing rules that prevent tech giants from using their platforms to give their AI products an unfair advantage. It also means ensuring that critical resources like computing power and data are accessible to everyone, not just Big Tech.


Why this matters

AI has the potential to bring significant benefits to society, but only if it’s developed in a way that’s open, fair and accountable. Mozilla believes that a few powerful corporations shouldn’t determine the future of AI. Instead, we need a diverse and vibrant ecosystem where public-interest research thrives and competition drives innovation and choice — including from open source, public, non-profit and private actors.

The findings emphasize the need for change. Improving access to foundation models for researchers and addressing the growing concentration of power in AI can help ensure that AI develops in ways that benefit all of us — not just the tech giants.

Mozilla is committed to advocating for a more transparent and competitive AI landscape; this research is an essential step toward making that vision a reality.