Categories: Firefox

Retrospective: Looking Glass

In December, we launched Looking Glass, a TV show tie-in with Mr. Robot, that alarmed some people because we didn’t think hard enough about the implications of shipping an add-on that had the potential to be both confusing and upsetting. We’re deeply sorry for this, and we understand why it’s important for us to learn and grow from this experience. As mentioned last month, we conducted a post-mortem to better understand how and why this happened and how we can do better.

The amount of valid and well-reasoned feedback we received from community members and users shows that we need to take action to make sure this isn’t going to happen again.

The experiments platform we used to deploy Looking Glass, known as SHIELD, is used to test many things, from simple configuration changes to potential new features, and we measure the effects of those changes in a privacy-preserving way. This platform helps us make decisions on new product features, evaluate whether or not a technology update is stable, and generally ensures that we can make good decisions in a responsible way. The team has invested time and energy to ensure that we are always clear and consistent about the kind of information we will capture in our studies.

Since the Looking Glass experience did not capture any data, it passed our internal privacy review. After our post-mortem, it was clear that this was part of the problem. A valid experiment always captures data to answer questions about small changes we make to Firefox as part of our testing. An ‘experiment’ that does not capture any data is not an experiment at all. A key learning here is that we need to better codify the use of SHIELD to make sure we are always using the platform as intended, to conduct experiments to measure potential changes to Firefox.

To clarify our intentions, we have created a set of principles that we will always follow when shipping a SHIELD study to our users. Two of these principles are most relevant to this situation.

A SHIELD study must be designed to answer a specific question.

We evaluated Looking Glass based on whether or not it upheld user privacy. Since it did not collect any data, we felt that it was safe. In retrospect, not capturing data was a strong indicator that this was not a good SHIELD study candidate, so we’re making sure we specifically evaluate future studies against this criterion to ensure that we don’t repeat our mistake.

A SHIELD study must always be named accurately.

We were deliberately misleading in the naming of this add-on. The intention was to preserve the surprise and delight of users participating in the Mr. Robot Alternate Reality Game, but it also violated our own advice for users, particularly where it pertains to recognizing malware.

The remainder of the principles are published on our wiki, and moving forward, it will be the responsibility of anyone publishing a SHIELD study to review the release against our set of published principles.

If a study doesn’t meet the standards outlined by our principles, it won’t be shipped. To ensure that we’re always adhering to these principles, we’re developing processes with the team to guarantee review by a broad set of people.

-By Nick Nguyen.