What happens when AI systems fail? Who should be held responsible when they cause harm? And how can we ensure that people harmed by AI can seek redress?
As AI is increasingly integrated into products and services across sectors, these questions will only become more pertinent. In the EU, the 2022 proposal for an AI Liability Directive (AILD) catalyzed debates around this issue. Its recent withdrawal by the European Commission leaves a wide range of open questions lingering, as businesses and consumers must now navigate fragmented liability rules across the EU’s 27 member states.
To answer these questions, policymakers will need to ask themselves: what does an effective approach to AI and liability look like?
New research published by Mozilla tackles these thorny issues and explores how liability could and should be assigned across AI’s complex and heterogeneous value chain.
Solving AI’s “problem of many hands”
The report, commissioned from Beatriz Botero Arcila — a professor at Sciences Po Law School and a Faculty Associate at Harvard’s Berkman Klein Center for Internet and Society — explores how liability law can help solve the “problem of many hands” in AI: that is, determining who is responsible for harm caused by an AI system when a variety of companies and actors contribute to its development along the value chain. This challenge is aggravated by the fact that AI systems are both opaque and technically complex, making their behavior hard to predict.
Why AI Liability Matters
Finding meaningful solutions to this problem requires different kinds of experts to come together. The report is therefore designed for a wide audience, and it indicates how specific audiences can best make use of its different sections, overviews, and case studies.
Specifically, the report:
- Proposes a three-step analysis for allocating liability along the value chain: 1) the choice of liability regime, 2) how liability should be shared among actors along the value chain, and 3) whether and how information asymmetries should be addressed.
- Argues that where ex-ante AI regulation is already in place, policymakers should consider how liability rules will interact with that regulation.
- Proposes a baseline liability regime where actors along the AI value chain share responsibility if fault can be demonstrated, paired with measures to alleviate or shift the burden of proof and to enable better access to evidence — which would incentivize companies to act with sufficient care and address information asymmetries between claimants and companies.
- Argues that in some cases, courts and regulators should apply a stricter regime, such as product liability or strict liability.
- Analyzes liability rules in the EU based on this framework.
Why Now?
We have already seen examples of AI causing harm, from biased automated recruitment systems to predictive AI tools used in public services and law enforcement generating faulty outputs. As the number of such examples grows with AI’s diffusion across the economy, affected individuals should have effective ways of seeking redress and justice — as we argued in our initial response to the AILD proposal in 2022 — and businesses should be incentivized to take sufficient measures to prevent harm. At the same time, businesses should not be overburdened with ineffective rules, and they should have legal certainty rather than face a patchwork of varying rules across the jurisdictions in which they operate. A well-designed, targeted, and robust liability regime for AI could address all of these challenges — and we hope the research released today contributes to a more grounded debate around this issue.