On December 10th, Australia’s controversial law banning people under 16 from accessing certain social media platforms entered into force. Since its adoption in 2024, the law has sparked a global debate on age verification online and has inspired governments across the world to restrict minors’ access to parts of the web.
At Mozilla, privacy and user empowerment have always formed a core part of our mission. Mozilla supports strong, proportionate safeguards for minors, but we caution against approaches that rely on invasive identity checks, surveillance-based enforcement, or exclusionary defaults. Such interventions rely on the collection of personal and sensitive data and thus introduce major privacy and security risks. By following an approach of abstinence and access control, they undermine the rights of young people to express themselves online but do little to mitigate the child safety risks policymakers seek to address, such as insufficient content moderation, irresponsible data practices, and addictive design.
Rather than simply restricting access to some online platforms, policymakers should focus on fixing the systemic issues at play and incentivize the creation of online spaces that benefit young people and their development.
We are therefore disappointed by the blunt and disproportionate approach taken by the Australian government. We are also concerned about the impact this law, and others like it, will have on online privacy and security, on people’s ability to express themselves and access information, and therefore on the health of the web itself.
The Australian law designates certain services as “age-restricted social media platforms”. This category includes social media platforms like Instagram and TikTok as well as video-sharing platforms like YouTube, while excluding certain categories of services, such as messaging providers, email services, and online games. Designated services must ensure that people under 16 years of age do not hold accounts on their platforms. To enforce this, the ages of all users – not just minors – must be determined.
The Australian law provides almost no guidance on how service providers should balance privacy, security, and the robustness of age assurance technologies when performing age checks. Providers are thus left to choose from bad options. In the UK, a similar approach has resulted in users having to entrust some of their most sensitive data to a plethora of newly established commercial age assurance providers in order to retain access to the various services they use. These actors often ask for extensive information while providing little accountability or transparency about their data handling practices. Beyond serious data breaches, this approach has also led to users losing access to messaging features and to the censorship of content deemed sensitive, such as posts about the situation in Gaza or the war in Ukraine. UK users have also demonstrated how ineffective the age-gating mechanisms of even some of the largest platforms are, easily bypassing age barriers with VPNs and video game features.
While many technologies exist to verify, estimate, or infer users’ ages, fundamental tensions around effectiveness, accessibility, privacy, and security have not been resolved. Rather, the most common forms of age assurance technologies all come with their own significant limitations:
- Age estimation refers to AI-based systems that estimate a user’s age, usually based on biometric data like facial images. They may perform well in placing users within broad age bands, but often struggle at key legal thresholds, such as distinguishing between 15 and 16 years old. More troubling are equity concerns: facial estimation systems often underperform for people with darker skin tones, women, and those with non-binary or non-normative facial features due to biased or limited training datasets.
- Age inference models draw on vast amounts of user data, such as browsing histories, to infer a user’s age. Limitations similar to those of biometric age estimation apply: pinpointing a user’s age at legal thresholds is challenging, and users exhibiting unusual behaviors might be profiled as younger or older than they are.
- Age verification usually refers to checking someone’s age against a form of government-issued ID. This approach might lead to more precise outcomes, but risks excluding millions of people without access to government ID – many of them minors themselves. It also forces people to share some of their most sensitive data with private companies, where it will be at risk of surveillance, repurposing, or access by law enforcement. Zero-knowledge proofs (ZKPs) – a cryptographic way to prove that a statement like “I am older than 18” is true without revealing one’s exact age – can help people limit what information they share (see the sketch after this list). Deploying a ZKP-based system that meets goals for veracity and privacy requires considerably more development, in both technical and governance terms, than any government has been willing to support. Beyond technical investment, clear frameworks limiting the information companies can collect are needed.
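To make that privacy property concrete, here is a minimal sketch – our illustration, not part of the Australian law or any deployed scheme – of the selective-disclosure pattern that ZKPs generalize: a hypothetical trusted issuer inspects the birthdate privately and attests only to a boolean predicate, so the platform never sees the date of birth. The names (`issue_age_token`, `ISSUER_SECRET`) are ours, and a production system would use asymmetric signatures and unlinkable presentations rather than a shared HMAC key.

```python
import hmac
import hashlib
import json
from datetime import date

# Hypothetical issuer key for this sketch; a real attestation service
# would hold an asymmetric signing key, never a shared secret.
ISSUER_SECRET = b"demo-issuer-key"

def issue_age_token(birthdate: date, threshold: int, today: date) -> dict:
    """The issuer checks the birthdate privately and signs ONLY the
    predicate. The token carries a boolean, never the birthdate, so the
    relying platform learns nothing beyond 'over the threshold or not'.
    """
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    claim = {"predicate": f"age>={threshold}", "result": age >= threshold}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_age_token(token: dict) -> bool:
    """The platform checks the issuer's tag and reads only the boolean."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["result"]

if __name__ == "__main__":
    # A 15-year-old cannot obtain a valid "age>=16" attestation.
    token = issue_age_token(date(2010, 3, 14), threshold=16, today=date(2025, 12, 10))
    print(token["claim"])           # {'predicate': 'age>=16', 'result': False}
    print(verify_age_token(token))  # False
```

A genuine ZKP goes further than this sketch: repeated presentations could be made unlinkable and the verifier would not need the issuer involved at presentation time – precisely the technical and governance work described above that remains unsupported.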
The Australian approach sends a worrying signal: that mandatory age verification and blanket bans are magical solutions to complex societal challenges, regardless of their implications for fundamental rights online. We are convinced, however, that there are rights-respecting alternatives policymakers can pursue to empower young people online and improve their safety and well-being:
- Young people have a right to privacy, safety, and expression, as set out in the UN Convention on the Rights of the Child. Policymakers should take a child rights-based approach to online safety that balances young people’s protection with their rights to societal participation, free expression, and access to media and information, and should pursue policies that allow young people to benefit from responsibly run online services.
- Blanket age-based bans and mandatory age verification should be rejected. Instead, parents and caregivers should be empowered to set age-appropriate limitations for their children on their devices. Policymakers should implement programs that give children the tools to manage online risks as they grow and learn, and should support the development of tools that parents, guardians, and schools can use for teaching and supervision.
- Rather than banning young people from certain platforms, policymakers should create incentives, enforce existing laws, and close regulatory gaps where necessary to address problematic practices that put all social media users’ privacy, well-being, and security at risk. These include extractive data practices and profiling, manipulative advertising, addictive design and dark patterns, and other harmful practices.
In Australia and elsewhere, we are committed to working alongside policymakers to advance meaningful protections for everyone online while upholding fundamental rights, accessibility, and user choice.
With special thanks to Martin Thomson, Distinguished Engineer at Mozilla, for his contributions to this blog.