Firefox Data

Improving privacy without breaking the web

First: thank you to our passionate and active Firefox users who participated in this shield study!

tl;dr – The Firefox Privacy team ran a user research study to learn how privacy protections affect users on websites, and we learned some surprising things. Over 19,000 users participated, spread across nine branches of the experiment: a control group plus eight privacy protections. We built an opt-in study to measure breakage data, we unblocked some existing privacy features, and we learned some new potential areas to improve privacy in the future. As a result, we’re adding more privacy protection to Firefox:

  1. In Firefox Quantum, all users can enable Tracking Protection for their regular browsing
  2. In Firefox 59+, Private Browsing will default to trimming Referer values to origins

(Note: You can also see the full presentation of these results)

Existing Knowledge, Assumptions, and Questions

For over a decade, Mozilla has been building privacy protections for Internet users. From Firefox desktop, to Firefox mobile and specialty browsers, to private encrypted web services like Send, we continuously strive to learn how to improve privacy technology across the web.

Recently, the Firefox Telemetry and Data platform helped us answer some long-standing questions for Firefox desktop privacy:

  • Does Tracking Protection break websites?
  • Do broken websites make users leave Firefox?
  • Are there existing privacy protections we could enable with minimal web breakage?

The shield study add-on

To help answer these questions, we built an opt-in shield study. We placed each user into one of nine branches of the study. Each branch corresponded to an existing Firefox privacy protection.

  • Control
    No changes
  • sessionOnlyThirdPartyCookies
    When the user closes Firefox, Firefox deletes third-party cookies.
  • noThirdPartyCookies
    Firefox disables all third-party cookies.
  • thirdPartyCookiesOnlyFromVisited
    Firefox does not send third-party cookies to a site unless the user directly visited the site in the past.
  • trackingProtection
    Activates tracking protection in regular browsing windows.
  • originOnlyRefererToThirdParties
    Trim requests’ Referer values to origins when sent to third parties.
  • resistFingerprinting
    Activates Firefox’s fingerprinting protections.
  • firstPartyIsolation
    Activates First-party Isolation.
  • firstPartyIsolationOpenerAccess
    Activates First-party Isolation, but allows pages to access openers.

Once a user was placed into a branch, we gave them a new browser toolbar icon to report problems. See the full presentation for a screenshot flow of the add-on experience.

The numbers

Over 19,000 users opted into the study, which gave us more than 2,100 users in each branch of the study, and over 8,500 active users on the most active day of the study.

Pie chart of users in each branch

2,100+ users in each branch


Chart of active users per day

Up to 8,500 active users per day

Measuring breakage

To quantify web breakage, we analyzed the data along three primary dimensions:

  • % of users who reported at least one problem
  • Average number of problems reported per user
  • % of users who disable the study (presumably because of problems)
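The three dimensions are straightforward to compute from per-user study data. Here is a minimal sketch, assuming a simplified per-user record with a report count and a disable flag (the field names are illustrative, not the study's actual schema):

```python
# Hypothetical sketch: computing the three breakage dimensions for one
# study branch from per-user report counts and disable flags.

def breakage_dimensions(users):
    """users: list of dicts like {"reports": int, "disabled": bool}."""
    n = len(users)
    reported = sum(1 for u in users if u["reports"] >= 1)
    total_reports = sum(u["reports"] for u in users)
    disabled = sum(1 for u in users if u["disabled"])
    return {
        "pct_reporting": reported / n,     # % of users with >= 1 problem
        "avg_reports": total_reports / n,  # average problems per user
        "pct_disabled": disabled / n,      # % of users who disabled the study
    }

# Toy branch of four users.
branch = [
    {"reports": 0, "disabled": False},
    {"reports": 2, "disabled": False},
    {"reports": 1, "disabled": True},
    {"reports": 0, "disabled": False},
]
dims = breakage_dimensions(branch)
```

Each dimension is then compared per branch against the control group's margin of error.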

We also analyzed the types of breakage, and those details are available in the full presentation of the results of the study.

Tracking Protection actually reduces problems

Firefox has had Tracking Protection built into its Private Browsing Mode since 2015. Tracking Protection blocks all third-party connections to domains on Disconnect's Tracking Protection block-list. We know that this breaks some websites whose code relies on the third-party resources. (We have a bug tree and a long list of webcompat.com issues for the Firefox feature, and we ran a Test Pilot experiment with the same block-list.)

In this study, we measured and compared breakage caused by Tracking Protection to a control group, and to breakage caused by other protections. Which led to our first surprising result …

Chart of average problems per user

The average problems reported per user of Tracking Protection was lower than the control group.


When we saw this, we dug into users’ comments to learn why. We saw a trend among the comments from users in the control group: “not responsive”, “slow”, “freezing”, “took longer to load”, “not always responding”, “laggy”, “doesn’t load fast” … and the comment that seemed to sum it all up:

Something on the page is slowing down the loading speed significantly.

Our finding here matches what web performance gurus have been saying for years: third-party scripts cause a large number of performance problems. Tracking Protection removes them completely, so the number of problems is reduced. In a sense, Tracking Protection may actually fix websites by blocking the tracking elements that slow them down.

Do broken websites make users leave Firefox?

Privacy & Security engineers have long understood: “without usable systems, the security and privacy simply disappears”. Firefox’s privacy protections must be usable on the web, or people will simply stop using Firefox altogether. While we could not measure the number of users who stopped using Firefox, we did measure the number of users who disabled the study.

Unsurprisingly, some privacy protections caused significantly more users to disable the study than others.

Chart of % of users who disabled the study

Significantly more users disabled resistFingerprinting and firstPartyIsolation branches of the study.


Surprisingly, though, the % of users disabling the study was low across all branches: between 5.7% minimum and 9.7% maximum. Furthermore, the % of users who disabled Tracking Protection, Origin-only Referer values to third parties, and any of the cookie protections were within the margin-of-error of the control group. This result indicates that, overall, many privacy protections don’t appear to break the web so much that users will disable them.

However, we did analyze the kinds of breakage that users reported, and we learned some specific broken websites and specific broken features that correlated to more users disabling the study. The details are available in the full presentation. In short, breaking “workflow” sites and features caused more people to disable the study.

Are there existing privacy protections we could enable with minimal web breakage?

To learn which branches of privacy protection were associated with the least overall breakage, we looked at each of our three dimensions to see which protections fell within a margin of error of the control group.

% of users reporting at least 1 problem: 6 protections are within the margin of error of the control group

average problems per user: 4 protections are within the margin of error of the control group

% of users who disabled the study: 5 protections are within the margin of error of the control group

We created a simple “composite breakage score” that multiplied these three dimensions together for a consolidated comparison. The graph below is a view of the data that emphasizes the relative differences.
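A minimal sketch of that composite score, with made-up numbers for illustration (the real per-branch values are in the full presentation). Multiplying the three dimensions rewards protections that score low on all of them:

```python
# "Composite breakage score": the product of the three breakage dimensions.
# All numbers below are illustrative, not the study's actual measurements.

def composite_breakage_score(pct_reporting, avg_reports, pct_disabled):
    return pct_reporting * avg_reports * pct_disabled

branches = {
    "control":                         (0.30, 0.75, 0.070),
    "originOnlyRefererToThirdParties": (0.28, 0.70, 0.057),
    "resistFingerprinting":            (0.45, 1.20, 0.097),
}

# Rank branches from least to most overall breakage.
ranked = sorted(branches, key=lambda b: composite_breakage_score(*branches[b]))
```

A product (rather than a sum) means a branch must do well on every dimension to rank well; a single bad dimension drags its score up sharply.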

"Composite Breakage Score" for each privacy protection

By this comparison, the most promising protections, in terms of lowest overall breakage were:

  1. Origin-only Referer values to third parties
  2. Session-only third-party cookies
  3. Tracking Protection

Data turns into action

After this study concluded, we presented the results to a number of teams, and we’re happy that a couple of strong decisions have already been made and are underway.

  1. In Firefox Quantum, all users can enable Tracking Protection for their regular browsing
  2. In Firefox 59+, Private Browsing will default to trimming Referer values to origins

In conclusion, we built an opt-in study to measure breakage data, we unblocked some existing privacy features, and we learned some new potential areas to improve privacy in the future. We look forward to using more data to improve privacy on the web.

Add-on recommendations for Firefox users: a prototype recommender system leveraging existing data sources

By: Alessio Placitelli, Ben Miroglio, Jason Thomas, Shell Escalante and Martin Lopatka.
With special recognition of the development efforts of Roberto Vitillo, who kickstarted this project; Mauro Doglio, for massive contributions to the code base during his time at Mozilla; Florian Hartmann, who contributed efforts towards prototyping the ensemble linear combiner; and Stuart Colville, for coordinating integration with AMO. Last, but not least, thanks to Anthony Miyaguchi, who helped shape the current code through his reviewing efforts.

What’s TAAR?
Firefox has a robust ecosystem of add-ons that can enhance the browsing experience, but all users are different and not all add-ons are right for everyone. The TAAR project (Telemetry Aware Addon Recommender) is an experimental product developed over the course of 2017 to provide a personalized experience for Firefox users seeking to install add-ons, based on information already available in Mozilla’s telemetry data. Our aim is to provide potentially interesting add-ons, or useful replacements for add-ons built on legacy technology, without the need for additional data collection. Add-ons created with the new standard (after legacy) are safer, more secure, and won’t break in new Firefox releases.
Unlike conventional recommender systems, we designed TAAR to provide interesting add-on recommendations based on the Telemetry data Firefox collects in accordance with Mozilla’s Data Privacy Principles and privacy policy. The data contains, among other things, browser performance data and a hardware overview. This information is collected from Firefox desktop, and users can disable the collection if they choose. Retrofitting an existing data source to a new application is no easy task, but we are really happy with how TAAR leverages different information sources, based on their availability, to provide a personalized add-on recommendation list.

Design philosophy
If there is anything the Netflix prize has taught the world about recommender systems, it is the importance of contextual information in practical recommendation ranking. In developing the TAAR system, we had additional constraints around the information available pertaining to clients’ interaction with the add-ons ecosystem.

Firstly, only 40% of Firefox users have an add-on installed and enabled. While existing add-on installations can be a very powerful predictor of add-on interest, collaborative-filter recommendations have a number of known shortcomings: the cold start problem, diversity and the long tail, and a general lack of information for clients without currently installed add-ons. For this reason, we implemented a fall-through approach that leverages information sources in order of their expected predictive value, depending on what information is available for a given client.

How is it being deployed?
Building a new feature into Firefox always raises the questions: will it work, and will our users find it useful? To answer these, we decided to let a fraction of Firefox users on the Release channel try it out through a SHIELD study: if a user is enrolled in this study, a modified about:addons page may be served that incorporates recommendations from TAAR.

How does it work?
When the user opens the TAAR-enabled about:addons page, Firefox fetches the page content from discovery.addons.mozilla.org (AMO frontend). The client id is sent along with the page request and forwarded to the taar-api endpoint.

The TAAR library is called right after the request is parsed and validated. It queries our backend services to look up the most recent data for the client given the provided client id. This data is eventually passed to the other TAAR components to produce relevant add-on recommendations, which are returned to the browser as a list of add-on GUIDs which are rendered on the about:addons page.

A description of the TAAR system workflow

How does TAAR work?
The TAAR system is made up of 3 main components:

  • the profile fetcher, which is responsible for looking up the latest data about the client given its id;
  • the recommendation modules, each one implementing a different recommendation model;
  • the recommendation manager, implementing a recommendation strategy and calling the relevant recommendation modules.

The current version of TAAR implements a basic recommendation manager with a simple strategy: when a recommendation is requested for a specific client id, the manager iterates linearly through the recommendation modules in their registration order. The first module that can perform a recommendation returns its results.

Each recommendation module defines its own, independent sets of requirements and exposes a single function that the manager can use to verify that a recommendation can be performed by that module. At the time of writing this post, we have four different TAAR modules: legacy, collaborative, similarity and locale.
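The fall-through strategy above can be sketched in a few lines. This is an illustrative sketch, not the actual TAAR API: the class names, the `can_recommend`/`recommend` method pair, and the toy modules are assumptions made for the example.

```python
# Sketch of the fall-through recommendation strategy: try modules in
# registration order; the first one whose requirements are met wins.

class RecommendationManager:
    def __init__(self, modules):
        # Modules are consulted linearly, in registration order.
        self.modules = modules

    def recommend(self, client_data, limit=10):
        for module in self.modules:
            # Each module independently decides if it has enough data.
            if module.can_recommend(client_data):
                return module.recommend(client_data, limit)
        return []  # no module could recommend anything

class LegacyRecommender:
    def can_recommend(self, client_data):
        return bool(client_data.get("legacy_addons"))
    def recommend(self, client_data, limit):
        return ["modern-replacement-guid"][:limit]

class LocaleRecommender:
    def can_recommend(self, client_data):
        return "locale" in client_data
    def recommend(self, client_data, limit):
        return ["popular-in-locale-guid"][:limit]

manager = RecommendationManager([LegacyRecommender(), LocaleRecommender()])
result = manager.recommend({"locale": "it-IT"})
```

Because the order is fixed, placing the highest-expected-value module first is what makes the fall-through equivalent to "use the best available information".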

To make sure recommendations are computed quickly, the heavy lifting for each recommendation module is performed off-line, in a series of Python ETL jobs that are scheduled to run weekly, on Monday.

The legacy model
This is the first model our recommendation strategy tries. Its objective is to suggest add-ons to users who have legacy add-ons installed (even if they are disabled).

On the backend, a weekly job makes use of the add-ons replacement API to build a dictionary of recommendations for each legacy add-on. The resulting file is shared with TAAR and consumed by the recommender.

The collaborative model
The collaborative filtering model attempts to perform relevant recommendations by analysing which add-ons similar users like. This model requires that each user has at least one installed add-on: the underlying assumption is that users with similar add-ons might have similar add-on interests.

The core of this approach is the recommender’s weekly job, which performs the following basic tasks: it loads the list of valid add-ons from AMO, builds a users/add-ons matrix, and performs matrix factorization. Since no user installs every single add-on, the initial matrix is sparse by definition. The purpose of the last step is therefore to find an approximate matrix that contains a confidence value for the strength of each add-on recommendation for every user.

The resulting approximation is then used by the TAAR module to perform the final recommendation: given the list of add-ons from the requesting user, the closest row is picked from the matrix. The add-ons with the highest confidence values in that row are then returned and recommended to the user.
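The lookup step can be sketched as follows. This is a deliberately simplified illustration: the add-on names, the tiny dense matrix standing in for the factorization output, and the overlap-based "closest row" heuristic are all assumptions for the example, not TAAR's actual distance computation.

```python
# Sketch: recommend from a precomputed users x add-ons confidence matrix
# (the dense approximation produced by the weekly factorization job).

ADDONS = ["adblock", "theme", "translate", "screenshot"]

CONFIDENCE = [
    [0.9, 0.1, 0.8, 0.2],  # row for users who like adblock-style add-ons
    [0.1, 0.9, 0.2, 0.7],  # row for users who like theme-style add-ons
]

def recommend(installed, limit=2):
    # Pick the row whose confidence best matches the user's installed
    # add-ons (a toy stand-in for the real closest-row computation).
    def match(row):
        return sum(row[ADDONS.index(a)] for a in installed)
    best = max(CONFIDENCE, key=match)
    # Return the highest-confidence add-ons the user doesn't already have.
    candidates = [(conf, a) for a, conf in zip(ADDONS, best) if a not in installed]
    return [a for conf, a in sorted(candidates, reverse=True)[:limit]]

recs = recommend(["adblock"])
```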

While this approach is intuitive and has proven to work at scale and in production, its main drawback is that it suffers from the cold start problem: new users might receive bad recommendations, and new add-ons might not get recommended. The first issue is addressed by requiring at least one installed add-on; the second is mitigated by the other recommendation modules.

The similarity-based model
The similarity based recommendation model aims to identify candidate clients that may be similar enough to a new client that the philosophy of a collaborative filter can be extended to the independent feature space of other telemetry variables. Pairwise similarity in a subspace of the Firefox telemetry features has been investigated for its predictive value in terms of add-on installation likelihood. We began by identifying a set of candidate add-on donors. In order to ensure that a diverse sample of add-ons was represented in our candidate donors, we applied a bisecting K-means clustering algorithm (a form of divisive clustering) utilizing only the vector representation of installed add-ons to derive a number of clusters encompassing a diverse sample of add-on installations.

Pairwise distances between clients are then computed from the non-add-on telemetry variables. Donor clients belonging to the same add-on cluster are deemed similar in terms of their add-on preferences, and similarity scores observed between same-cluster clients are pooled as “same cluster” distances. Likewise, distances computed between clients in different add-on clusters are accumulated in a list of “different cluster” observed distances, to generalise the relationship between add-on similarity and telemetry similarity.

The generalisation of pairwise distances computed for in-group and out-group relationships allows us to specify a model comparing the probability of observing a particular similarity (here synonymous to a specified distance metric) under one of two assumptions (same add-on cluster versus different add-on cluster). This can be represented as a general likelihood ratio model that gives us a very natural quantity pertaining to the chances that an add-on (taken from the pool of those installed by an add-on donor) may be interesting for a new client with a particular similarity to that client donor.
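A toy sketch of that likelihood ratio: given the pooled "same cluster" and "different cluster" distance samples, estimate how much more likely an observed telemetry distance is under the same-cluster assumption. The crude histogram density (and its smoothing floor) is an assumption for the example, not the model TAAR actually fits.

```python
# Toy likelihood-ratio model over pooled pairwise distances.

def histogram_density(samples, value, bin_width=1.0):
    # Fraction of samples falling in the bin around `value`,
    # floored at one count to avoid division by zero.
    in_bin = sum(1 for s in samples if abs(s - value) <= bin_width / 2)
    return max(in_bin, 1) / len(samples)

def likelihood_ratio(same, different, distance):
    # > 1 means the new client looks more like a same-cluster donor pair,
    # i.e. the donor's add-ons are plausibly interesting to this client.
    return histogram_density(same, distance) / histogram_density(different, distance)

same_cluster = [0.5, 0.7, 0.9, 1.0, 1.1]        # pooled in-group distances
different_cluster = [2.8, 3.0, 3.1, 3.4, 3.9]   # pooled out-group distances

lr = likelihood_ratio(same_cluster, different_cluster, 0.8)
```

Ranking donors by this ratio for an incoming client is what turns the pooled distance distributions into an ordered recommendation list.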

The corresponding ETL job can be seen here and the recommender module itself is implemented in the core TAAR library.

Candidate add-on donors are sampled weekly ensuring continuously fresh sampling of the add-ons ecosystem and allowing the possibility of new pattern discovery and the inclusion of new add-ons in the recommendation pool.

The similarity recommender module utilises the likelihood ratio model (computed weekly) to compare an incoming client with a set of (weekly resampled) donors to generate a ranked list of add-on recommendations.

The locale model
This is the last model that gets called, and the last attempt we make to give the user a reasonable add-on recommendation. It relies on the user’s locale to recommend add-ons that are relevant in that geographical area.

The locale ETL job computes the most reported add-ons for each of our known locales. However, to preserve users’ privacy, we don’t recommend add-ons in locales for which we don’t have enough data. This mitigates the risk of recommending add-ons that could be traced back to a small group of individuals.
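The job's privacy threshold can be sketched like this. The threshold value, the input shape, and the function name are all illustrative assumptions, not the ETL job's actual code:

```python
# Sketch: top add-ons per locale, dropping locales with too few clients.
from collections import Counter

MIN_CLIENTS = 3  # hypothetical minimum sample size per locale

def top_addons_by_locale(reports, limit=2):
    """reports: list of (locale, addon_guid) pairs, one per client."""
    clients_per_locale = Counter(locale for locale, _ in reports)
    counts = {}
    for locale, addon in reports:
        counts.setdefault(locale, Counter())[addon] += 1
    return {
        locale: [addon for addon, _ in addons.most_common(limit)]
        for locale, addons in counts.items()
        # Privacy threshold: never emit recommendations for sparse locales.
        if clients_per_locale[locale] >= MIN_CLIENTS
    }

reports = [
    ("en-US", "adblock"), ("en-US", "adblock"), ("en-US", "theme"),
    ("fr", "translate"),  # only one fr client: below the threshold
]
top = top_addons_by_locale(reports)
```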

The first TAAR study
Shield is a Firefox user testing platform that allows us to try out and evaluate new features through experimentation, or what we call Shield Studies. Common applications of Shield Studies include changing preferences, displaying messaging or distributing surveys.

To test the efficacy of TAAR, we ran an opt-out shield study on the Firefox Release channel. For half the users enrolled in the study, we changed the URL for the Add-ons Discovery Pane, which loads when a user types about:addons into the address bar or clicks on “Add-ons” from the Firefox menu.

TAAR v1.0 study design

The altered URL tells AMO to send the user’s client id to TAAR and load recommendations into the page, if possible. Within each test group we exposed half to a notification prompting the user to “try add-ons” with a link to the about:addons page, thus creating 4 distinct groups with equal probabilities (0.25) of assignment. We tried to focus on new users for the study, to get a better understanding of clients’ entry into the add-ons ecosystem between our prompted and un-prompted study branches.

Client view of the about:addons page. Recommendations were inserted in the sections outlined in red

We focused on two measures in this shield study:

  • installation rate (proportion of clients that installed 1+ add-on(s));
  • installs per client (number of installs / number of clients).
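The two measures above, computed over per-client install counts (the numbers here are made up, not the study's data):

```python
# Sketch: the study's two measures from per-client install counts.

def measures(install_counts):
    n = len(install_counts)
    installed_any = sum(1 for c in install_counts if c >= 1)
    return {
        # proportion of clients that installed 1+ add-on(s)
        "installation_rate": installed_any / n,
        # number of installs / number of clients
        "installs_per_client": sum(install_counts) / n,
    }

control = measures([0, 0, 1, 0])
treatment = measures([0, 2, 1, 0])
```

Note the two measures can move independently: a branch can leave the installation rate flat while still raising installs per client, which is exactly the pattern the findings below describe.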

We came away with three main findings from this study:

  • clients exposed to a notification are more likely to install an add-on (installation rate +17.19%);
  • clients that see TAAR recommendations are not more likely to install an add-on (installation rate +~0%, small effect, but not statistically significant);
  • clients that see TAAR recommendations are more likely to install a larger number of add-ons (installs per client +1.4%).

An additional data source available in the analysis following the first TAAR study was the corpus of application log files generated by the TAAR library itself. A slightly deeper dive into the findings regarding the number of add-on installations throughout the course of the study revealed an interesting narrative. Clients initially served by the similarity-based model were frequently seen later, with around 18% returning to the TAAR service and being served by the collaborative model.

Lessons learned
Firstly, we learned that the awesome power of the SHIELD platform allows us to deploy large-scale studies seamlessly in the release population. It was a great experience to deploy a Firefox service via SHIELD in this manner.

A slightly harder lesson is that clients’ attention is very difficult to steer. Of the 1.2 million users initially enrolled in this study, a mere 3.5% interacted specifically with the TAAR library during the study period. Perhaps more obvious signaling could be considered, but we also wanted to avoid overly aggressive disruption of the default Firefox user experience.

In addition, we found that clients are very polarized in how they interact with the add-ons ecosystem. Only around 12% of clients interacting with the add-ons ecosystem (not necessarily leading to an add-on installation) were observed to visit both about:addons and addons.mozilla.org, meaning that ~88% of users interacted exclusively with one of the two locations, perhaps indicating that many users are not aware of the two channels by which the AMO servers can be reached.

In conjunction with the TAAR study, we have continued to tinker with the TAAR library as a project in active development. Our initial predictions regarding client eligibility for each of the recommendation modules proved to be pretty close to the mark, but an interesting discovery was that in the live add-ons ecosystem, interacting with real clients, the modules exhibit differences in their likely recommendations.

Future directions
We learned several valuable lessons in the first TAAR study. But we also were forced to think about the efficacy of the recommendations in a very pragmatic manner. In terms of maximizing the rate of installation we are tempted to recommend the (globally) most popular add-ons, especially to new users. However, showing only the most popular add-ons can reinforce a polarized ecosystem where it is nearly impossible for new add-ons to thrive. We aim to do better; we believe that delivering less generic recommendations could be more helpful to our users as these add-ons are better tailored to their experiences.

We have also been busy analyzing the correlations in the feature spaces leveraged by the individual TAAR modules. While the telemetry space is full of correlations and latent relationships, we believe that the reduced space operated on by the four main recommendation modules (excluding the locale recommender) exhibits sufficient statistical independence that a linear ensemble recommendation strategy could be very helpful. Also, taking a peek at older, more established client profiles, we think that the additional and independent information provided by the similarity-based recommender can help refine the already great recommendation list provided by collaborative filtering. Likewise, information regarding now-disabled legacy extensions can be very useful in providing recommended substitutes. The next version of TAAR will utilize a linear stacked ensemble learner in lieu of the current recommendation manager module.

Additional improvements currently in development and planned for TAAR are as follows:

  • better integration of AMO information sources in generating recommendations, including ratings, download rates, uninstall rates, add-on metadata;
  • the combination of individual recommendation modules in an ensemble;
  • new parallelized execution of the individual TAAR modules;
  • back-end optimizations to reduce end-to-end latency of the TAAR web service.

Look forward to the follow-up studies scheduled to evaluate our latest improvements to the TAAR library in early 2018.

And as always… thanks to all the Firefox users out there for continuing to take part in SHIELD studies, helping us improve Firefox every day. To every single Firefox user who continues to trust us with their data by enabling extended telemetry. And to all the Firefox Pioneers whose contribution of extended data collection allows us to do the best job we can at exploring and refining Firefox features!

Two Days, or How Long Until the Data is In

Two days.

It doesn’t seem like long, but that is how long you need to wait before looking at a day’s Firefox data and being sure that 95% of it has been received.

There are some caveats, of course. This only applies to current versions of Firefox (55 and later). This will very occasionally be wrong (like, say, immediately after Labour Day when people finally get around to waking up their computers that have been sleeping for quite some time). And if you have a special case (like trying to count nearly everything instead of just 95% of it) you might want to wait a bit longer.

But for most cases: Two Days.

As part of my 2017 Q3 Deliverables I looked into how long it takes clients to send their anonymous usage statistics to us using Telemetry. This was a culmination of earlier ponderings on client delay, previous work in establishing Telemetry client health, and an eighteen-month (or more!) push to actually look at our data from a data perspective (meta-data).

This led to a meeting in San Francisco where :mreid, :kparlante, :frank, :gfritzsche, and I settled upon a list of metrics that we ought to measure to determine how healthy our Telemetry system is.

Number one on that list: latency.

It turns out there’s a delay between a user doing something (opening a tab, for instance) and them sending that information to us. This is client delay and is broken into two smaller pieces: recording delay (how long from when the user does something until when we’ve put it in a ping for transport), and submission delay (how long it takes that ready-for-transport ping to get to Mozilla).

If you want to know how many tabs were opened on Tuesday, September the 5th, 2017, you couldn’t tell on the day itself. All the tabs users open late at night won’t even be in pings, and anyone who puts their computer to sleep won’t send their pings until they wake their computer in the morning of the 6th.

This is where “Two Days” comes in: On Thursday the 7th you can be reasonably sure that we have received 95% of all pings containing data from the 5th. In fact, by the 7th, you should even have that data in some scheduled datasets like main_summary.
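The "how many days until 95%" question is just a percentile over per-ping delays. A minimal sketch, with made-up delays (the real measurement is over the telemetry pipeline's received timestamps):

```python
# Sketch: smallest delay d (in days) such that at least `pct` of pings
# arrived within d days of the activity date they describe.

def days_until_pct(delays_in_days, pct=0.95):
    ordered = sorted(delays_in_days)
    # 1-based rank of the ping at which we cross the target percentile,
    # i.e. ceil(pct * n), converted to a 0-based index.
    idx = int(pct * len(ordered) + 0.999999) - 1
    return ordered[idx]

# 20 illustrative pings: most arrive the same day or the next day,
# one straggler (a slept laptop) shows up a week later.
delays = [0] * 12 + [1] * 6 + [2] * 1 + [7] * 1
wait = days_until_pct(delays)
```

With this toy distribution the answer lands on two days: the straggler beyond the 95th percentile doesn't move the number, which is exactly why "count nearly everything" use cases need a longer wait.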

How do we know this? We measured it:

Chart of client “main” ping delay for the latest version. (Remember what I said about Labour Day? That’s the exceptional case on beta 56.)

Most data, most days, comes in within a single day. Add a day to get it into your favourite dataset, and there you have it: Two Days.

Why is this such a big deal? Currently the only information circulating in Mozilla about how long you need to wait for data is received wisdom from a pre-Firefox-55 (pre-pingsender) world. Some teams wait up to ten full days (!!) before trusting that the data they see is complete enough to make decisions about.

This slows Mozilla down. If we are making decisions on data, our data needs to be fast and reliably so.

It just so happens that, since Firefox 55, it has been.

Now comes the hard part: communicating that it has changed and changing those long-held rules of thumb and idées fixes to adhere to our new, speedy reality.

Which brings us to this blog post. Consider this your notice that we have looked into the latency of Telemetry data, and it looks pretty darn quick these days. If you want to know what happened on a particular day, you don’t need to wait ten days any more.

Just Two Days. Then you can have your answers.

:chutten

(Much thanks to :gsvelto and :Dexter’s work on pingsender and using it for shutdown pings, :Dexter’s analyses on ping delay that first showed these amazing improvements, and everyone in the data teams for keeping the data flowing while I poked at SQL and rearranged words in documents.)

(This is a cross-post from chuttenblog. I have quite a few posts on there that you might like, including a series on Windows XP in Firefox, a bunch of Satisfying Graphs, and Reasons Why Data Science Is Hard)

Recording new Telemetry from add-ons

One of the successes for Firefox Telemetry has been the introduction of standardized data types: histograms and scalars.

They are well defined and allow teams to autonomously add new instrumentation. As they are listed in machine-readable files, our data pipeline can support them automatically and new probes just start showing up in different tools. A definition like this enables views like this:

The distribution view for the max_concurrent_tabs scalar on the TMO dashboard.

This works great when shipping probes in the Firefox core code, going through our normal release and testing channels, which takes a few weeks.

Going faster

However, often we want to ship code faster using add-ons: this may mean running experiments through Test Pilot and SHIELD or deploying Firefox features through system add-ons.

When adding new instrumentation in add-ons, there are two options:

  • Instrumenting the Firefox core code, then waiting a few weeks until it reaches release.
  • Implementing a custom ping and submitting it through Telemetry, requiring additional client and pipeline work.

Neither is satisfactory; both involve significant manual effort for running simple experiments and adding features.

Filling the gap

This is one of the main pain points that comes up when adding new data collection, so over the last months we have been planning how to solve it.

As the scope of an end-to-end solution is rather large, we are currently focused on getting the support built into Firefox first. This can enable some use-cases right away. We can then later add better and automated integration in our data pipeline and tooling.

The basic idea is to use the existing Telemetry APIs and seamlessly allow them to record data from new probes as well. To enable this, we will extend the API with registration of new probes from add-ons at runtime.

The recorded data will be submitted with the main ping, but in a separate bucket to tell them apart.

What we have now

We now support add-on registration of events, starting with Firefox 56. We expect event recording to mostly be used with experiments, so it made sense to start here.

With this new addition, events can be registered at runtime by Mozilla add-ons instead of using a registry file like Events.yaml.

When starting, add-ons call nsITelemetry.registerEvents() with information on the events they want to record:

Services.telemetry.registerEvents("myAddon.ui", {
  "click": {
    methods: ["click"],
    objects: ["redButton", "blueButton"],
  }
});

Now, events can be recorded using the normal Telemetry API:

Services.telemetry.recordEvent("myAddon.ui", "click",
                               "redButton");

This event will be submitted with the next main ping in the “dynamic” process section. We can inspect them through about:telemetry:

The event view in about:telemetry, showing that the event ["myAddon.ui", "click", "redButton"] was recorded with a timestamp.

On the pipeline side, the events are available in the events table in Redash. Custom analysis can access them in the main pings under payload/processes/dynamic/events.
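To make that location concrete, here is a minimal plain-JavaScript sketch of where dynamic events sit in a main ping payload. The ping fragment is illustrative (not the full ping schema), and the `[timestamp, category, method, object]` array form follows Telemetry's usual event serialization.

```javascript
// Illustrative fragment of a "main" ping payload (not the full schema).
// Dynamically registered events land under processes/dynamic/events,
// each serialized as [timestamp, category, method, object].
const payload = {
  processes: {
    dynamic: {
      events: [
        [123456, "myAddon.ui", "click", "redButton"],
      ],
    },
  },
};

// A custom analysis reads them from payload/processes/dynamic/events:
function dynamicEvents(payload) {
  return (payload.processes.dynamic || {}).events || [];
}

const [timestamp, category, method, object] = dynamicEvents(payload)[0];
console.log(category, method, object); // prints: myAddon.ui click redButton
```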

The larger plan

As mentioned, this is the first step of a larger project that consists of multiple high-level pieces. Not all of them are feasible in the short-term, so we intend to work towards them iteratively.

The main driving goals here are:

  1. Make it easy to submit new Telemetry probes from Mozilla add-ons.
  2. New Telemetry probes from add-ons are easily accessible, with minimal manual work.
  3. Uphold our standards for data quality and data review.
  4. Add-on probes should be discoverable from one central place.

This larger project then breaks down into roughly these main pieces:

Phase 1: Client work.

This is currently happening in Q3 & Q4 2017. We are focusing on adding & extending Firefox Telemetry APIs to register & record new probes.

Events are supported in Firefox 56, scalars will follow in 57 or 58, then histograms on a later train. The add-on probe data is sent out with the main ping.

Phase 2: Add-on tooling work.

To enable pipeline automation and data documentation, we want to define a variant of the standard registry formats (like Scalars.yaml). By providing utilities we can make it easier for add-on authors to integrate them.

Phase 3: Pipeline work.

We want to pull the probe registry information from add-ons together in one place, then make it available publicly. This will enable automation of data jobs, data discovery and other use-cases. From there we can work on integrating this data into our main datasets and tools.

The later phases are not set in stone yet, so please reach out if you see gaps or overlap with other projects.

Questions?

As always, please reach out to us if you have questions.

Firefox data platform & tools update, Q2 2017

The data platform and tools teams are working on our core Telemetry system, the data pipeline, providing core datasets and maintaining some central data viewing tools.

To make new work more visible, we provide quarterly updates.

What’s new in the last few months?

Beta “main” ping submission delay analysis by :chutten, showing a clear and significant downward trend.

A lot of work in the last months was on reducing latency, supporting experimentation and providing a more reliable experience of the data platform.

On the data collection side, we have significantly improved reporting latency starting with Firefox 55; preliminary results from Beta show we receive 95% of the “main” ping within 8 hours (compared to over 90 hours previously). Curious for more detail? #1 and #2 should have you covered.

We also added a “new-profile” ping, which gives a clear and timely signal for new clients.

There is a new API to record active experiments in Firefox. This allows annotating experiments or interesting populations in a standard way.
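In Firefox this annotation is exposed through TelemetryEnvironment (e.g. setExperimentActive). The snippet below is a plain-JavaScript model of the annotation shape only, with hypothetical standalone functions, not the actual Firefox implementation: the ping environment maps an experiment id to its active branch.

```javascript
// Hypothetical model of experiment annotations in the ping environment
// (shape is illustrative, not the real TelemetryEnvironment code):
// environment.experiments maps an experiment id to its active branch.
const environment = { experiments: {} };

function setExperimentActive(id, branch) {
  environment.experiments[id] = { branch };
}

function setExperimentInactive(id) {
  delete environment.experiments[id];
}

setExperimentActive("pref-flip-example", "treatment");
console.log(environment.experiments["pref-flip-example"].branch); // prints: treatment
```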

The record_in_processes field is now required for all histograms. This removes ambiguity about which process they are recorded in.
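For illustration, here is what a Histograms.json-style entry with the now-required field might look like. The histogram name and its other details are hypothetical, not an actual probe, and the entry is wrapped in a JavaScript object literal only so it can stand alone.

```javascript
// Hypothetical Histograms.json-style entry (illustrative, not a real probe).
// record_in_processes is now mandatory, removing ambiguity about which
// process the histogram is recorded in.
const exampleHistogram = {
  "EXAMPLE_ADDON_LATENCY_MS": {
    "record_in_processes": ["main", "content"],
    "expires_in_version": "60",
    "kind": "exponential",
    "high": 10000,
    "n_buckets": 50,
    "description": "Hypothetical latency measure, in milliseconds."
  }
};
```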

The data documentation moved to a new home: docs.telemetry.mozilla.org. Are there gaps in the documentation you want to see filled? Let us know by filing a bug.

For datasets, we added telemetry_new_profile_parquet, which makes the data from the “new-profile” ping available.

Additionally, the main_summary dataset now includes all scalars and uses a whitelist for histograms, making it easy to add them. Important fields like active_ticks and Quantum release criteria were also added and backfilled.

For custom analysis on ATMO, cluster lifetimes can now be extended self-serve in the UI. Scheduled job stability also saw major improvements.

There were first steps towards better support for Zeppelin notebooks; they can now be rendered as Markdown in Python.

The data tools work is focused on making our data available in a more accessible way. Here, our main tool Redash saw multiple improvements.

Large queries should no longer show the slow script dialog and scheduled queries can now have an expiration date. Finally, a new Athena data source was introduced, which contains a subset of our Telemetry-based derived datasets. This brings huge performance and stability improvements over Presto.

What is up next?

For the next few months, interesting projects in the pipeline include:

  • The experiments viewer & pipeline, which will make it much easier to run pref-flipping experiments in Firefox.
  • Recording new probes from add-ons into the main ping (events, scalars, histograms).
  • We are working on defining and monitoring basic guarantees for the Telemetry client data (like reporting latency ranges).
  • A re-design of about:telemetry is ongoing, with more improvements on the way.
  • A first version of Mission Control will be available, a tool for more real-time release monitoring.
  • Analyzing the results of the Telemetry survey (thanks everyone!) to inform our planning.
  • Extending the main_summary dataset to include all histograms.
  • Adding a pre-release longitudinal dataset, which will include all measures on those channels.
  • Looking into additional options to decrease the Firefox data reporting latency.

How to contact us.

Please reach out to us with any questions or concerns.

Measuring Search in Firefox

Today we are launching a new search data collection initiative in Firefox. This data will allow us to greatly improve the Firefox search experience while still respecting user privacy.

Search is both a fundamental method for navigating the Web and how Mozilla makes much of its revenue. Internal user research shows that users have complicated search workflows: they often start a search from places like the Awesome Bar or search bar and then continue to refine that search on the search engine results page. We call these additional searches follow-on searches.

Firefox telemetry already counts the searches users perform in all Firefox search bars, but it does not yet count follow-on searches. This is a real challenge for Mozilla, because we don’t fully understand how well the Firefox search experience works for our users.

A new experiment launching today will measure follow-on searches. When you search with one of the search engines that we include in Firefox, we will increment a counter for each follow-on search. Our telemetry system will count follow-on searches the same way we already count direct searches from our search bars. We won’t collect search queries (the words you type into the search box) or any other Web browsing activity.
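As a rough sketch of what counting follow-on searches could look like, the snippet below keeps a per-engine, per-search-type tally. The key format and function names are hypothetical illustrations, not Firefox’s actual implementation; the point is that only counters are kept, never the query text.

```javascript
// Hypothetical sketch of keyed search counting (names and key format
// are illustrative, not Firefox's actual implementation). Only a
// counter per (engine, type) key is kept -- never the search query.
const searchCounts = new Map();

function recordSearch(engine, type) {
  // type: "sap" for a search started from a Firefox search bar,
  // "sap-follow-on" for a refinement on the results page.
  const key = `${engine}.in-content:${type}`;
  searchCounts.set(key, (searchCounts.get(key) || 0) + 1);
}

recordSearch("example-engine", "sap");
recordSearch("example-engine", "sap-follow-on");
recordSearch("example-engine", "sap-follow-on");

console.log(searchCounts.get("example-engine.in-content:sap-follow-on")); // prints: 2
```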

We will roll out the new experiment to a random sample of 10% of Firefox release users. If successful, we will extend these follow-on search measurements to our entire release population as a part of our normal telemetry system.

We seek these new measurements to gain missing insight into a crucial browser interaction, and they are consistent with our data collection principles. Data helps us decide where to apply our limited resources to improve Firefox, while also safeguarding user privacy. Mozilla will continue to provide public documentation and user controls for all telemetry collected within Firefox. With better insight into search behavior, we can improve Firefox and continue to sustain Mozilla’s mission.

Javaun Moradi,  Sr. Product Manager, Firefox Search