Firefox Sentiment Reports

Hello all from the User Advocacy team! We have a special new tool to discuss today that will help us gauge the impact of each Firefox release on our amazing user base. We call it the User Sentiment Report (USR). While we've been working on it internally for the past couple of releases, fine-tuning and tweaking the report, we are finally ready to make it publicly available for both Desktop and Android, starting with Firefox 21!

What is a Sentiment Report?

The User Advocacy team spends a great deal of time reading and tracking feedback from our hundreds of millions of users across several channels (SUMO, Input, etc.). We use this feedback to find the pain points, problems, and pleasures that users have with our products, and then report on them to make sure our products get better with each release. One challenge is getting the big picture of all the feedback for a release and gauging the general feeling of our users: did a release cause a lot of problems, or was it a smooth one? So the Sentiment Report was created. It lets us look at a specific release, see the general feeling of users for that release (based on the number of SUMO reports, negative pieces of input, etc.), and easily compare it to previous releases (allowing us to determine what a "normal" release looks like). The report is generated at the end of every release cycle using feedback gathered since release day. These insights have never been possible before, so we are excited to have expanded the report to Firefox for Android.

Firefox for Android Sentiment Report

While the Firefox for Desktop Sentiment Report has been running for a couple of releases as we fine-tuned it, we haven't had a Firefox for Android report until this cycle. Using what we learned from Desktop, here is the first ever Mozilla User Sentiment Report for Android!

There are a few things I'd like to highlight about the report. First is the trending topics. On input.mozilla.org we receive thousands of pieces of feedback for each release of Firefox, far too many to read manually. So, with the help of the metrics team, we created an auto-tagger which, using a hand-made training set, automatically tags all feedback that comes through Input into roughly 20 buckets (crashes, website issues, Flash issues, etc.). This allows us to watch for spikes and drops in various types of feedback and chase down anomalies when they happen. In the Sentiment Report, you can see the current top ten categories compared with the last 5 releases (blue points are downward trends, red are upward). The numbers you see are negative input per 10 million ADI (Active Daily Installs).
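To make the idea concrete, here is a minimal sketch of how such an auto-tagger could work, assuming a scikit-learn-style pipeline in Python. The category names, training examples, and model choice are all illustrative, not Mozilla's actual implementation:

```python
# A toy auto-tagger: train a text classifier on a small hand-labeled
# set, then use it to bucket new feedback automatically.
# (Illustrative sketch; categories and examples are invented.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hand-made training set: (feedback text, category) pairs.
training_texts = [
    "Firefox crashes every time I open it",
    "This website looks broken in Firefox",
    "Flash videos will not play anymore",
]
training_labels = ["Crashes", "Website issues", "Flash issues"]

tagger = make_pipeline(TfidfVectorizer(), MultinomialNB())
tagger.fit(training_texts, training_labels)

# Tag new, unlabeled feedback.
new_feedback = ["The browser crashed after the update"]
print(tagger.predict(new_feedback))  # e.g. ['Crashes']
```

The real training set is hand-curated and far larger, of course; the point is simply that a classifier trained on labeled examples can bucket thousands of new pieces of feedback without anyone reading them one by one.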

Cost of Support (COS). This is an important metric: we track clicks on the Help button in Firefox for Android per thousand ADIs. We watch this number because a spike in Help button clicks combined with a corresponding decrease in ADIs likely means there is a significant problem in that version of Firefox that is causing us to lose users. You can compare the week-by-week Cost of Support and the overall COS for each version. Lower is obviously better (few clicks on the Help button with high ADIs).
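As a quick illustration of the arithmetic (the numbers here are invented):

```python
# A toy sketch of the Cost of Support metric described above:
# Help button clicks per 1,000 Active Daily Installs (ADIs).
def cost_of_support(help_clicks: int, adis: int) -> float:
    """Return Help button clicks per thousand ADIs."""
    return help_clicks / (adis / 1_000)

# Example: 4,200 Help clicks in a week against 3,000,000 ADIs.
print(cost_of_support(4_200, 3_000_000))  # 1.4 clicks per 1,000 ADIs
```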

Most Painful Issues for Top Devices. Another awesome thing the auto-tagger lets us do is track which devices are having more issues in which categories, per release. For example, we can see that the Nexus 7 had major issues with crashing in previous releases, but that this has consistently trended downward, meaning the crash work we have been doing is paying off. Conversely, we can see that the Asus TF300T had a small spike in complaints about slowness in Firefox 21, so we should begin to look into that for future releases. We can track these numbers for any device we have data on, but obviously we don't have room to show them all in the report.
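For illustration, counting tagged feedback by device and category across releases could look like the following pandas sketch; the column names and sample rows are assumptions, not the real Input schema:

```python
import pandas as pd

# Invented sample of auto-tagged feedback rows.
feedback = pd.DataFrame({
    "release":  ["20", "20", "21", "21", "21"],
    "device":   ["Nexus 7", "Nexus 7", "Nexus 7",
                 "Asus TF300T", "Asus TF300T"],
    "category": ["Crashes", "Crashes", "Crashes",
                 "Slowness", "Slowness"],
})

# Count complaints per device/category, one column per release,
# so spikes and drops are easy to spot side by side.
counts = (feedback
          .groupby(["device", "category", "release"])
          .size()
          .unstack("release", fill_value=0))
print(counts)
```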

Firefox for Desktop Sentiment Report

Of course, we have this report for Desktop too. Released at the same time as the Android report, you can view the Desktop User Sentiment Report! It tracks many of the same things as the Android report, with some notable differences.

Trending Topics. We don't have an auto-tagger for Desktop (yet), so this section consists of topics that were automatically generated from feedback we received on this release and then manually curated to give the most relevant information possible.

Survey Data. We have a tool on Firefox for Desktop called Startup Snippets. Users who use the default about:home page in Firefox see these little snippets of text under the search bar. At certain times during a release cycle, we deploy a link to a survey via these snippets to a sample of our users, asking for feedback on the latest version of Firefox. We then use this survey data to generate a star rating for Firefox (1-5 stars, with 5 being the highest).
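As a toy illustration of turning survey responses into that star rating (the response values below are invented, and a plain average is only the simplest plausible aggregation):

```python
# Invented survey responses, one 1-5 rating each.
responses = [5, 4, 4, 3, 5, 2, 4]

# Simplest plausible aggregation: a plain average.
stars = sum(responses) / len(responses)
print(f"{stars:.1f} stars out of 5")  # 3.9 stars out of 5
```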

The rest of the report is fairly self-explanatory: Cost of Support (the number of users who click the Help button in Firefox per thousand ADIs), negative input (and positive input), and the number of new support threads each week. Using all these graphs, we can watch how a release fares over every week of the cycle and compare it with previous releases.

The Future

By no means are these reports in their final form. Every Firefox cycle brings new improvements to the User Sentiment Reports. We hope to add many new features in the future, such as tracking Google Play data for Android, an auto-tagger for Desktop, etc. If you have feedback or ideas for these reports, please feel free to contact Tyler Downer or Matt Grimes.

Of Course…

None of this would have been possible without a ton of hard work from many different people. Special thanks go to Hamilton Ulmer, Ali Almossawi, Annie Elliott, Ibai Garcia, and countless other people for their feedback and ideas.

SUMO Thread Analysis: Better Tools

At SUMO we always want to help our contributors help our users. To that end, we have begun analyzing SUMO threads to see how we can help contributors improve their responses and, hopefully, help more users. Some other members of the SUMO team and I analyzed a week's worth of threads to try to find holes in our current process.

The Process

We analyzed all threads created on SUMO from April 1 to April 7, 365 threads in total. We then sorted these threads into 41 categories, ranging from "Website Looks / Acts Wrong" to "Firefox Crashes" and everything in between.

Out of these categories, 23 (or 56%) had fewer than 5 threads each, too small a sample for analysis. Another category was made up of threads about the Java blocklisting; while those may be interesting for a separate analysis, we discarded them for this one.

Then we chose the categories that had the most useful information for our purposes: how many threads had responses, how many of those responses were useful, how many were just general replies (like "Try in Safe Mode"), etc. These categories turned out to be the following:
Firefox Crashes (19 threads)
Website Looks / Acts Wrong (41 threads)
Firefox is Slow (10 threads)
Problems Caused by Plugins (10 threads)

The Results

Across these threads, it became apparent that over half (50-60%) received only very basic, general troubleshooting answers. These were also the threads with a very low (~20%) reply rate from the original poster. It seems that users are more willing to reply when a contributor addresses their issue directly than when they get only a generic answer.

The answers that were neither general answers nor solid answers broke down as follows:
Needs better troubleshooting (roughly 12% of questions)
Too technical (roughly 7%)

The rest of the questions, those that did not fall into these three categories, had actual solid answers. While they may not have had a solution marked, they had answers that, judging from how well they were written and how directly they addressed the original poster's issue, seemed to have a high chance of fixing the problem. If we can increase the rate of users coming back to SUMO to update their questions, that will help the number of "solid answer" threads go up.

One good thing we found in our analysis: out of the four main groups of questions, only one question went unanswered! This means we are doing really well at making sure nearly every thread gets a reply.

Solutions

From the analysis, it seems that if we can help contributors provide more useful answers, we should see a higher percentage of solved threads and solid answers. To that end, we have come up with a few suggestions that we can begin to implement immediately:

Contributor Guidelines:
Provide documentation on SUMO for contributors. This can range from how to begin diagnosing different issues (crash IDs, extensions, websites, etc.) to general help with interacting with different kinds of users. We can list common issues, how to reply to them, tools to suggest to users, tools not to suggest, etc.

Contributor Workshops:
Beginning Class: Once every X weeks, we hold a class to teach people who want to contribute, or have recently started contributing, the basics of responding to threads, troubleshooting, and acting professionally. These don't even have to be run by Mozilla staff; experienced and trusted contributors could be asked to help run them.

Special Guest Class: Developers, SUMO staff, etc. come and hold a webinar to explain new features in Firefox, how they work, what some common issues are or may be, and what types of feedback they are looking for. For example, the Firefox developer in charge of the pdf.js feature could hold a session about what it is, give a basic overview of how it works, cover some known issues and how to fix them, and ask the community to keep an eye out and give feedback on X, Y, and Z, followed by a Q&A.

Specialty Webinars: Every so many months, or as needed based on feedback, SUMO staff give sessions on diagnosing hangs, reading a crash ID, troubleshooting websites, etc.

Help Wanted!

All of these are just ideas for now. Obviously, the sooner we get better tools to the community, the sooner we can improve the service we give to end users. We would love feedback from the community on ways to improve the currently available tools. Nobody knows how we can help the community better than the community itself, so the more input we get from you, the better! You can ping me on IRC (:Tyler), send me a message on SUMO (tylerdowner), or leave a comment. I'd love to hear from you and hear all your ideas! Specifically, we would love feedback on these areas:
1. How useful is this analysis? Do you want to see more information from it? Should we repeat it, and if so, how often?
2. Do you feel these tools will help the community (you!)? Do you have suggestions, or even totally new ideas?