CAPTCHA successor Privacy Pass has no easy answers for online abuse
As much as the Web continues to inspire us, we know that sites endure an awful lot of abuse in order to stay online. Denial-of-service attacks, fraud, and other forms of abuse put constant pressure on website operators.
One way that sites protect themselves is to find some way to sort “good” visitors from “bad.” CAPTCHAs are a widely loathed and unreliable means of distinguishing human visitors from bots. Even worse, beneath this sometimes infuriating facade is a system that depends extensively on invasive tracking and profiling.
(You can find a fun overview of the current state of CAPTCHA here.)
Finding a technical solution to this problem that avoids such privacy violations is an appealing challenge, but a difficult one. Well-meaning attempts that don’t give due consideration to other factors can easily fail. For instance, Google’s Web Environment Integrity proposal fell flat because of its potential to unduly constrain personal choice in how people engage online (see our position for details).
Privacy Pass is a framework published by the IETF that is seen as having the potential to help address this difficult problem. It is a generalization of a system originally developed by Cloudflare to reduce their dependence on CAPTCHAs and tracking. For the Web, the central idea is that Privacy Pass might provide websites with a clean indication that a visitor is OK, separate from the details of their browsing history.
The way Privacy Pass works is that one website hands out special tokens to people the site thinks are OK. Other sites can ask people to give them a token. The second site then knows that a visitor with a token is considered OK by the first site, but they don’t learn anything else. If the second site trusts the first, they might treat people with tokens more favorably than those without.
The cryptography that backs Privacy Pass provides two interlocked guarantees:
- authenticity: the recipient of a token can guarantee that it came from the issuer
- privacy: the recipient of the token cannot trace the token to its issuance, which prevents them from learning who was issued each token
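These two guarantees can be realized with blind signatures: the issuer signs a value it cannot link back to the final token, yet anyone holding the issuer’s public key can verify the result. The Python sketch below is a toy illustration of that idea only; real Privacy Pass deployments use IETF-standardized issuance protocols (such as RSA blind signatures, RFC 9474), and the tiny RSA parameters here offer no security whatsoever.

```python
# Toy illustration of the blind-signature idea behind Privacy Pass
# issuance. Deliberately insecure parameters; illustration only.
import secrets
from math import gcd

# Issuer's toy RSA key pair (insecurely small, for illustration only).
p, q = 999983, 1000003              # two small, well-known primes
n = p * q
e = 65537                           # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def blind(token: int) -> tuple[int, int]:
    """Client blinds its token with a random factor r before issuance."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            return (token * pow(r, e, n)) % n, r

def issue(blinded: int) -> int:
    """Issuer signs the blinded value; it never sees the raw token."""
    return pow(blinded, d, n)

def unblind(blinded_sig: int, r: int) -> int:
    """Client removes the blinding factor, leaving a valid signature."""
    return (blinded_sig * pow(r, -1, n)) % n

def verify(token: int, sig: int) -> bool:
    """Any site with the issuer's public key can check authenticity."""
    return pow(sig, e, n) == token

# Authenticity: the unblinded signature verifies against the raw token,
# even though the issuer only ever saw the blinded value (privacy).
token = secrets.randbelow(n - 1) + 1
blinded, r = blind(token)
sig = unblind(issue(blinded), r)
assert verify(token, sig)
```

Because the issuer signs only the blinded value, it cannot later match a redeemed token back to the issuance event, which is precisely the unlinkability the second guarantee describes.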
The central promise of Privacy Pass is that the privacy guarantee would allow the exchange of tokens to be largely automated, with your browser forwarding tokens from sites that trust you to sites that are uncertain. This could happen without any action on your part. Sites could then use these tokens to reduce their dependence on annoying and ineffective CAPTCHAs.
Our analysis of Privacy Pass shows that while the technology is sound, applying that technology to an open system like the Web comes with a host of non-technical hazards.
We examine the privacy properties of Privacy Pass, how useful it might be, whether it could improve equity of access, and whether it might bias the Web toward centralization. The problems we find are not technical in nature and are hard to reconcile.
In considering how Privacy Pass might be deployed, there is a direct tension between privacy and open participation. The system requires token providers to be widely trusted to respect privacy, but our vision of an open Web means that restrictions on participation cannot be imposed lightly. Resolving this tension is necessary when deciding who can provide tokens.
The analysis concludes that the problem of abuse will not yield to a technical solution like Privacy Pass alone. For a problem this challenging, technical options might not provide a comprehensive solution, but they need to do more than shift problems around; they need to complement other measures. Privacy Pass does allow us to focus on the central problem of identifying abusive visitors, but safeguards are needed to prevent a number of serious secondary problems.
Our analysis does not ultimately identify a path to building the non-technical safeguards necessary for a successful deployment of Privacy Pass on the Web.
Finally, we look at the deployments of Privacy Pass in the Safari and Chrome browsers. We conclude that these deployments have inadequate safeguards against the problems we identify.