Analysis of Google’s Privacy Budget Proposal

Fingerprinting is a major threat to user privacy on the Web. Fingerprinting uses existing properties of your browser, such as screen size and installed add-ons, to create a unique or semi-unique identifier that can be used to track you around the Web. Even if individual values are not particularly unique, the combination of values can be unique (e.g., how many people are running Firefox Nightly, live in North Dakota, and have an M1 Mac and a big monitor?).
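
To make this concrete, here is a minimal sketch in Python of how individually common attributes combine into a near-unique identifier. The attribute values and population fractions are invented for illustration, not real measurements:

    import hashlib

    # Hypothetical fraction of users sharing each attribute value;
    # these numbers are invented for illustration, not measurements.
    attributes = {
        "browser": ("Firefox Nightly", 0.001),
        "region": ("North Dakota", 0.002),
        "hardware": ("M1 Mac", 0.05),
        "monitor": ("big (>= 32in)", 0.01),
    }

    # A tracker combines the raw values into a single stable identifier.
    fingerprint = hashlib.sha256(
        "|".join(value for value, _ in attributes.values()).encode()
    ).hexdigest()

    # If the attributes were independent, the fraction of users sharing
    # this exact combination would be the product of the fractions.
    shared_fraction = 1.0
    for _, fraction in attributes.values():
        shared_fraction *= fraction

    print("fingerprint:", fingerprint[:16], "...")
    print(f"roughly 1 in {1 / shared_fraction:,.0f} users share this combination")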

This post discusses a proposal by Google to address fingerprinting called the Privacy Budget. The idea behind the Privacy Budget is to estimate the amount of information revealed by each piece of fingerprinting information (called a “fingerprinting surface”, e.g., screen resolution) and then limit the total amount of that information a site can obtain about you. Once the site reaches that limit (the “budget”), further attempts to learn more about you would fail, perhaps by reporting an error or returning a generic value. This idea has been getting a fair amount of attention and has been proposed as a potential privacy mitigation in some in-development W3C specifications.
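
The following sketch models the basic mechanism. It is not Google's actual design; the surface names, bit costs, budget value, and generic fallback values are all assumptions made for illustration:

    # A toy model of the budget mechanism: each surface has an estimated
    # information cost in bits; once a site's queries would exceed the
    # budget, it receives a generic value instead of the real one.

    SURFACE_BITS = {"user_agent": 8.0, "screen_width": 4.0,
                    "screen_height": 4.0, "timezone": 3.0}
    GENERIC = {"user_agent": "Generic/1.0", "screen_width": 1920,
               "screen_height": 1080, "timezone": "UTC"}
    BUDGET_BITS = 10.0

    class PrivacyBudget:
        def __init__(self):
            self.spent = 0.0

        def query(self, surface, real_value):
            cost = SURFACE_BITS[surface]
            if self.spent + cost > BUDGET_BITS:
                return GENERIC[surface]  # over budget: return generic value
            self.spent += cost           # note: naively charges full cost,
            return real_value            # even for correlated surfaces

    site = PrivacyBudget()
    print(site.query("user_agent", "Firefox Nightly/123"))  # real value (spends 8 bits)
    print(site.query("timezone", "America/Chicago"))        # "UTC": would exceed budget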

While this seems like an attractive idea, our detailed analysis of the proposal raises questions about its feasibility. We see a number of issues:

  • Estimating the amount of information revealed by a single surface is quite difficult. Moreover, because some values are much more common than others, any single per-surface estimate is misleading. For instance, the Chrome browser has many users, so learning that someone uses Chrome is not very identifying; by contrast, learning that someone uses Firefox Nightly is quite identifying because there are few Nightly users (see the first sketch after this list).
  • Even if we are able to set a common value for the budget, it is unclear how to determine whether a given set of queries exceeds that value. The problem is that these queries are not independent, so you cannot simply add up the information from each one. For instance, screen width and screen height are highly correlated, so once a site has queried one, learning the other is not very informative (also illustrated in the first sketch below).
  • Enforcement is likely to lead to surprising and disruptive site breakage, because sites will exceed the budget and then be unable to make API calls that are essential to site function. This will be exacerbated because the order in which the budget is consumed is nondeterministic and depends on factors such as the network performance of various sites, so some users will experience breakage and others will not.
  • It is possible that the privacy budget mechanism itself could be used for tracking: a site exhausts the budget with a particular pattern of queries and later tests which queries still work, because those already succeeded (see the second sketch below).
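
The sketch below makes the first two points concrete: the information revealed by a value depends on how common that value is, and a naive per-surface sum overstates what correlated surfaces reveal together. The market-share and resolution figures are illustrative, not measurements:

    import math

    # Self-information: a value shared by a fraction p of users reveals
    # -log2(p) bits. The market shares here are rough illustrative figures.
    for browser, share in [("Chrome", 0.65), ("Firefox Nightly", 0.0001)]:
        print(f"{browser}: {-math.log2(share):.1f} bits")
    # Chrome: 0.6 bits; Firefox Nightly: 13.3 bits

    # Correlation: in this toy distribution each screen width pairs with
    # exactly one height, so querying both reveals no more than querying
    # one, while a per-surface sum counts the information twice.
    resolutions = {(1920, 1080): 0.6, (2560, 1440): 0.3, (3840, 2160): 0.1}

    def entropy(dist):
        return -sum(p * math.log2(p) for p in dist.values())

    widths, heights = {}, {}
    for (w, h), p in resolutions.items():
        widths[w] = widths.get(w, 0.0) + p
        heights[h] = heights.get(h, 0.0) + p

    print(f"joint entropy: {entropy(resolutions):.2f} bits")               # 1.30
    print(f"naive sum:     {entropy(widths) + entropy(heights):.2f} bits") # 2.59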
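
And a sketch of the last point, assuming (as the proposal would seem to imply) that a surface already charged to the budget keeps working on later queries, while a new surface fails once the budget is exhausted. The surface names and costs are hypothetical:

    BUDGET_BITS = 10.0
    COST = 5.0                       # every surface costs 5 bits here
    SURFACES = ["a", "b", "c", "d"]

    class Budget:
        def __init__(self):
            self.charged = set()     # persists per user/site pair

        def query(self, surface):
            if surface in self.charged:
                return True          # already paid for: still works
            if (len(self.charged) + 1) * COST > BUDGET_BITS:
                return False         # would exceed the budget: fails
            self.charged.add(surface)
            return True

    budget = Budget()

    # First visit: exhaust the budget on a user-specific pair of surfaces.
    # Which 2 of the 4 surfaces were queried encodes one of C(4,2) = 6 IDs.
    for surface in ("a", "c"):
        budget.query(surface)

    # Later visit: probe everything; only already-charged surfaces succeed,
    # recovering the identifier {"a", "c"} without any new budget spend.
    recovered = {s for s in SURFACES if budget.query(s)}
    print(recovered)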

While we understand the appeal of a global solution to fingerprinting — and no doubt this is the motivation for the Privacy Budget idea appearing in specifications — the underlying problem here is the large amount of fingerprinting-capable surface that is exposed to the Web. There does not appear to be a shortcut around addressing that. We believe the best approach is to minimize the easy-to-access fingerprinting surface by limiting the amount of information exposed by new APIs and gradually reducing the amount of information exposed by existing APIs. At the same time, browsers can and should attempt to detect abusive patterns by sites and block those sites, as Firefox already does.

This post is part of a series of posts analyzing privacy-preserving advertising proposals.

For more on this:

Building a more privacy-preserving ads-based ecosystem

The future of ads and privacy

Privacy analysis of FLoC

Mozilla responds to the UK CMA consultation on Google's commitments on the Chrome Privacy Sandbox

Privacy analysis of SWAN.community and Unified ID 2.0

