In Q4 2020 the EU will propose what’s likely to be the world’s first general AI regulation. While there is still much to be defined, the EU looks set to establish rules and obligations around what it’s proposing to define as ‘high-risk’ AI applications. In advance of that initiative, we’ve filed comments with the European Commission, providing guidance and recommendations on how it should develop the new law. Our filing brings together insights from our work in Open Innovation and Emerging Technologies, as well as the Mozilla Foundation’s work to advance trustworthy AI in Europe.
We share the objective the Commission outlined in its strategy: developing a human-centric approach to AI in the EU. The new and cutting-edge technologies we often collectively refer to as “AI” hold the promise and potential to bring immense benefits and advancements to our societies, for instance in medicine and food production. At the same time, we have seen some harmful uses of AI amplify discrimination and bias, undermine privacy, and violate trust online. The challenge before the EU institutions, then, is to create the space for AI innovation while remaining cognisant of, and protecting against, the risks.
We have advised that the EC’s approach should be built around four key pillars:
- Accountability: ensuring the regulatory framework will protect against the harms that may arise from certain applications of AI. That will likely involve developing new regulatory tools (such as the ‘risk-based approach’) as well as enhancing the enforcement of existing relevant rules (such as consumer protection laws).
- Scrutiny: ensuring that individuals, researchers, and governments are empowered to understand and evaluate AI applications, and AI-enabled decisions – through for instance algorithmic inspection, auditing, and user-facing transparency.
- Documentation: striving to ensure better awareness of AI deployment (especially in the public sector), and to ensure that applications allow for documentation where necessary – such as human rights impact assessments in the product design phase, or government registries that map public sector AI deployment.
- Contestability: ensuring that individuals and groups who are negatively impacted by specific AI applications have the ability to contest those impacts and seek redress, for example through collective action.
The Commission’s consultation focuses heavily on issues related to AI accountability. Our submission therefore provides specific recommendations on how the Commission could better realise the principle of accountability in its upcoming work. Building on the consultation questions, we provide further insight on:
- Assessment of applicable legislation: In addition to ensuring the enforcement of the GDPR, we underline the need to take account of existing rights and protections afforded by EU law concerning discrimination, such as the Racial Equality Directive and the Employment Equality Directive.
- Assessing and mitigating “high risk” applications: We encourage the Commission to further develop (and/or clarify) its risk mitigation strategy, in particular how, by whom, and when risk is assessed. We highlight a range of points here: context and use are critical components of risk assessment; safeguards must be comprehensive; diversity in the risk assessment process matters; and “risk” should not be the only tool in the mitigation toolbox (moratoriums, for example, should also be considered).
- Use of biometric data: The collection and use of biometric data comes with significant privacy risks and should be carefully considered, where possible through an open, consultative, and evidence-based process. Any AI applications harnessing biometric data should conform to the existing legal standards governing the collection and processing of biometric data in the GDPR. Beyond questions of enforcement and risk mitigation, we also encourage the Commission to explore edge cases around biometric data that are likely to come to prominence in the AI sphere, such as voice recognition.
A special thanks goes to the Mozilla Fellows 2020 cohort, who contributed to the development of our submission, in particular Frederike Kaltheuner, Fieke Jansen, Harriet Kingaby, Karolina Iwanska, Daniel Leufer, Richard Whitt, Petra Molnar, and Julia Reinhardt.
This public consultation is one of the first steps in the Commission’s lawmaking process. Consultations in various forms will continue through the end of the year, when the draft legislation is due to be proposed. We’ll continue to build out our thinking on these recommendations, and we look forward to collaborating further with the EU institutions and key partners to develop a strong framework for a trusted AI ecosystem. You can find our full submission here.