The new face of Firefox

Madhava Enros


We shipped it!

Firefox 29 is winging its way to hundreds of millions of people as we speak, and it brings with it the set of changes we’ve been calling Australis: an interface streamlining, evolution, or complete overhaul, depending on who you talk to. It represents years of concepting, designing, building, testing, rebuilding and shipping. We’re extremely pleased and proud that it’s ready for you to use.

This is the most beautiful and detailed version of Firefox yet. Along with the immediately apparent visual improvements, I’m also really proud of the care and craft that’s gone into how the browser feels, especially the way controls respond as you use them.

Every bit of the new interface has been finely-tuned to be fast and simple. The forward button is only shown when it is relevant; the downloads button shows progress only when there is progress to show; the bookmarks button makes it clear where your bookmarks can be found.

I think I’m most proud of how simple and engaging it is now to spend time customizing the browser. The process of configuring a browser could easily have felt like a chore, but instead I think we’ve built it in a way that people will explore, enjoy, and revisit until they’ve really made Firefox their own.

This is also not a finish line so much as a new, firmer foundation. Firefox’s new design provides a better, more extensible interface model that will accommodate future features and additions. It’s a simpler presentation of add-ons as equals to built-in browser features. And it finally brings us a familiar look and feel across all our platforms, so that Firefox feels like Firefox everywhere.

For those of you interested in how we got here, the Australis redesign was shaped and focused by our Firefox Design Values. While they all come into play, certain of them were particularly relevant here.

The You help make it and Balances power and simplicity principles flow into customization as a top-level priority. That balance is different for everyone, which is why we feel it is important to give you the choice to make Firefox yours.

The Finely crafted and High user performance principles come through in the detail and care we put into the browser’s look and the efficiency with which you can use it.

The animation in interactions like bookmarking is an expression of our Exuberance value. There’s a liveliness to the way the controls respond and explain what’s happening, as in the way the browser resizes when you enter the new customization mode.

There’s a lot to say about the particulars of the design, but others are already doing this extremely well. Here’s a list of other design posts, which I’ll keep up to date.

Further design writing on Australis and Firefox 29

A huge heartfelt thank you to all the Mozillians who helped to make Australis real — we’ve been looking forward to this day for a long time. And thank you, everyone, for using the new Firefox!

The Experience of Mind Reading

Tony Santos

Building an experience around recommender systems

When you think about shopping on the web, or watching movies on the web, or listening to music on the web, or doing pretty much anything on the web, at some point most of us expect that “the web” will make some suggestions on where it is we’re going, what we’re watching, listening to, or buying. These suggestions can take the form of ads that follow us around for days at a time (creepy) or quiet little “people who looked at this also looked at these things” sections of a web page. How computers figure out what we want and when we want it is a black box of magical math and statistics for most of us, myself included until just recently.

Let’s read people’s minds!

A few months ago the Firefox Marketplace team decided to start exploring what it would mean to add a recommender system to the Firefox Marketplace. System-generated recommendations are something we’ve talked about before, and this time we had help from a fantastic team at Telefonica Labs. They built a functioning prototype of a recommender system for the Marketplace that performed “better than popular,” the gold standard against which all recommender systems are compared. We had the engine and we had the statistical analysis, but I was curious about how people would perceive the system: would they actually think it’s better than popular recommendations after using it?

A peek into the magical black box of recommendations

Before I even attempt to (poorly) describe any of the magic that allows computers to recommend things to people, I want to stress that I am a complete novice in this field. There is a fantastic slide deck put together by Alexandros Karatzoglou, one of the super-smart researchers we are working with at Telefonica Labs. It is a much better intro to recommender systems than what I’m about to write.


Basically, if we know how enough people feel about a few things, and those feelings overlap, we can make pretty good guesses about how they’ll feel about other things, as long as some of them have seen things that the others have not.
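To make that concrete, here’s a minimal sketch of user-based collaborative filtering, the general family of techniques at work here. This is our own illustration, not the Telefonica Labs engine; the types and names are hypothetical.

```typescript
// user -> (item -> rating): how a set of people feel about a set of things
type Ratings = Map<string, Map<string, number>>;

// Cosine similarity between two users, computed over the items they share.
function similarity(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, normA = 0, normB = 0;
  for (const [item, ra] of a) {
    const rb = b.get(item);
    if (rb !== undefined) {
      dot += ra * rb;
      normA += ra * ra;
      normB += rb * rb;
    }
  }
  return dot === 0 ? 0 : dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Score items the user hasn't seen by weighting other users' ratings by
// taste similarity, then return the top-k items.
function recommend(ratings: Ratings, user: string, k = 5): string[] {
  const mine = ratings.get(user) ?? new Map<string, number>();
  const scores = new Map<string, number>();
  for (const [other, theirs] of ratings) {
    if (other === user) continue;
    const sim = similarity(mine, theirs);
    if (sim <= 0) continue;
    for (const [item, rating] of theirs) {
      if (mine.has(item)) continue; // only recommend unseen items
      scores.set(item, (scores.get(item) ?? 0) + sim * rating);
    }
  }
  return [...scores.entries()]
    .sort((x, y) => y[1] - x[1])
    .slice(0, k)
    .map(([item]) => item);
}
```

Note that a brand-new user overlaps with no one, so a sketch like this returns nothing for them; that gap is the “cold start” discussed below.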

To be clear: we are NOT collecting any data on any Marketplace users at this time. Future recommendations in the Marketplace will be an opt-in, and we will be very clear on what data we would use and how.

Setting up the test

We set up a front-end for the recommendations engine by modifying the existing Marketplace UI slightly to repurpose it for the test. The problems we identified and wanted to explore were:

  • How do users perceive the “cold start?”
  • How do “real unbiased” preferences perform in the engine?
  • How do weighted preferences perform in the engine?
  • What are users’ reactions to “Recommended Apps?”

We built a prototype that displayed 21 popular apps. These were presented as “Recommended apps,” and we asked a series of questions to rate a user’s perceptions of these apps as her initial recommendations. This scenario is the “cold start,” where we don’t have any information to make recommendations with. We then asked the participant to press the next button. She was presented with a list of apps and asked to select 10 that looked interesting to her. We used these interests to generate a “new set of recommendations,” this time actual recommendations. The final step was to present the participant with the same set of questions we asked at the beginning, to see whether the real recommendations were perceived as better or worse than the popular apps list.
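As a rough sketch of that flow (hypothetical names, not the Marketplace codebase): with no stored preferences all we can show is the popular list, and once the participant has picked apps the engine can produce real recommendations.

```typescript
// Anything that can turn a list of liked apps into recommendations.
interface RecommenderEngine {
  recommend(preferences: string[], count: number): string[];
}

// Cold-start fallback: show popular apps until we know something about
// the user, then switch to engine-generated recommendations.
function appsToShow(
  engine: RecommenderEngine,
  popularApps: string[],
  preferences: string[],
): string[] {
  if (preferences.length === 0) {
    return popularApps.slice(0, 21); // the 21 popular apps from the test
  }
  return engine.recommend(preferences, 21);
}
```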

A note on the “real unbiased” preferences vs the “weighted preferences” I mentioned above: I made this distinction early in planning the test, unaware of just how much it would affect the results, because I had no idea how the recommendations engine actually worked. You may be asking yourself “wouldn’t this be biased and unbiased preferences?” and maybe even “aren’t all preferences inherently biased?” These are both good questions, so I want to define the terms as I used them in planning the test.

When I say “real, unbiased preferences,” what I mean is letting people choose apps that are interesting to them without the influence of popularity. This may seem silly if you are familiar with the Firefox Marketplace, because all of our listings default to a sort by popularity. The intention was to simulate someone with eclectic tastes, someone finding things that are less popular but that nonetheless strongly represent their interests. To simulate this effect, we took the whole Marketplace catalog, sorted it randomly, and presented it to participants. This gave users an equal chance of finding a popular app or a relatively unknown one when choosing their preferences.

When I say weighted preferences, I’m simply referring to preferences participants shared with us from a list sorted by popularity. This is an important distinction because I wanted to be able to account for real preferences, which could be very unpopular but very informative, vs satisficing behavior, which I initially believed to be potentially less informative.

It turns out both of these decisions have very real consequences on how the engine works, and those showed up in our findings.

What did we learn about mind reading?

Satisficing is ok

On our first day of testing, we used the “real unbiased preference” method of collecting preferences from our participants. I point this out because it turns out that the engine doesn’t really know how to deal with unpopular things, and the recommendations that came back were… off. Linus, another one of the researchers, explained it to me: “When the engine doesn’t know anything about an app, because no one has downloaded it or very few people have, all it can do is give back random results.” And indeed that’s what we were seeing.

This is an important point, and I bring it up to highlight something I learned while testing these kinds of systems with real people: my attempt to control against satisficing was totally incompatible with the recommendations engine, because satisficing is kinda how recommendations work. To account for this, we switched the list of apps we provided to 250 popular apps, cutting off the top 100, sorted randomly (sketched below). The recommendations we saw in the later tests made a little more sense, statistically and preferentially, than the randomness we saw in the first batch.
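Here’s that sketch, again with hypothetical names, of the two ways we built preference lists from a popularity-sorted catalog.

```typescript
// Fisher-Yates shuffle; returns a new randomly ordered array.
function shuffle<T>(items: T[]): T[] {
  const out = [...items];
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

// Day one: the whole catalog, randomly ordered ("real unbiased" preferences).
function unbiasedList(catalogByPopularity: string[]): string[] {
  return shuffle(catalogByPopularity);
}

// Revised method: 250 popular apps, skipping the top 100 (ranks 101-350),
// randomly ordered -- popular enough for the engine to know them.
function popularList(catalogByPopularity: string[]): string[] {
  return shuffle(catalogByPopularity.slice(100, 350));
}
```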

Catalog Counts

Overall, we found that the depth of the catalog is a major factor in people’s opinion of recommendations being “good.” As I went back and looked at the apps recommended based on the preferences we collected, a lot of the recommendations were actually pretty good. In many cases the things recommended were similar types of apps, which I felt was what the recommender was supposed to be doing. The overall goal of the recommendations is to offer you something similar to the other things you like, because you’ll probably like the recommended thing based on that similarity.

This didn’t seem to overcome the vast majority of “unknown apps” in the results, though: if participants didn’t recognize at least a few of the apps in the list, they said they didn’t think the recommendations were very good. There appears to be a level of familiarity expected from recommendations on the Internet. This leads me to believe that some kind of justification (e.g. “this recommendation is based on”) is important when recommending less popular items. Explaining why a recommendation was made is especially important for less “tech-savvy” users, who seem to be far less patient with having to do a lot of exploration. We’ve seen this disinterest in other studies, and the trend held true here: users who reported lower levels of computer proficiency clicked into app details less often to explore the catalog.

Labels are almost as good as the real thing

Of the 12 participants who successfully completed the study’s tasks, seven took the initial list of popular apps to be “recommendations” simply because that’s what the label said. Five of those seven mentioned an assumption that we knew something about them based on their browsing history. Three of those five used that assumption to explain why the “recommendations” they saw on first load were good. The other two used it to justify a complaint that the recommendations should have been better than they were. This is interesting and important because it turns out that just calling something “recommendations” can make people assume you’ve been profiling them somehow.

What’s next for the Firefox Marketplace

Overall, this study was enlightening and we’re now exploring how we can add recommendations into the Marketplace in a way that is useful, fun, and not creepy. We aren’t sure what that means yet, but expect to see some more writing (and pictures) around this topic soon. If you’re interested in contributing to this Marketplace project, reach out and let’s talk. There is still research and design to be done.

I’d like to thank the awesome team at Telefonica Labs, Alexandros Karatzoglou, Linus Baltrunas, and João Baptista, for hosting us and helping us make this even a possibility. I’d also like to thank my awesome coworkers David Bialer and Rob Hudson for their help in getting us to this point and for the work still to come on this feature.

Firefox Design Values

Madhava Enros

What are the attributes of Firefoxiness?

This question has been with us on the Firefox UX design team for years, but it really came to the forefront when we started designing Firefox for Android. The goal was not just to create a great mobile browser but to create mobile Firefox. We realized, as we looked for guidance, that the only map we had was the terrain itself — Firefoxiness meant “like desktop Firefox.”

We worked and intuited our way through that design process and found a balance, but it made the need for a crisp articulation of Firefoxiness extremely clear. And it’s coming up more and more; as we design new features for our existing products, take Firefox to new platforms, and create new products, we will want to be sure: is what we’re making a clear expression of what it means to be Firefox? What will make it more Firefoxy? What will we not do because it’s not true to Firefox?

To that end, the Firefox UX designers and researchers convened and did some soul-searching, post-it-ing, and clustering. I took the results and teased out a set of Firefox Design Values.

Here are the short versions of what they mean:

Takes care of you
Firefox champions you – your security, privacy, and the quality of your online life.

You help make it
Firefox is only a perfect fit once it’s in your hands and you make it your own.

Plays well with others
Firefox never locks you into particular services or providers; instead, it gives you choice and independence, along with great suggestions.

Exuberant
Firefox is human, fun, whimsical, and joyful.

Global
There’s a real diversity of use and need across the globe, and Firefox cares about these differences.

Finely Crafted
Firefox is made by people who care about the details.

Balances power and simplicity
Firefox will never overwhelm you with interface, but it will also give you the satisfaction of using the web with mastery.

Makes sense of the web
Firefox focuses on real human goals and activities and gives you the tools you need to accomplish your ends.

High user-performance
Firefox is viscerally responsive; highly tuned and eager to browse.

We have the full descriptions and further detail here, as well as a PDF of them in booklet form.

So far, the values have been very well received — Mozillians across the community have let us know that the set we’ve produced makes sense to them and is helpful in framing discussions. Best of all, they recognize their own beliefs about Firefox in the set we’ve produced. UXers are using the values to explain the whys and hows of the designs we’re pursuing; in fact, the set of interface changes in Australis was shaped and focused by them.

In the end, design values don’t necessarily tell us what to do — that comes from user needs, mission, and market strategy — but they remind us of how we should do it. As we do more, we should make sure that it’s true to Firefox.

UXIM 2014 Recap

Yuan Wang


Two weeks ago, I attended User Experience Immersion Mobile (UXIM) for the first time in Denver, CO. This is a conference organized by UIE (User Interface Engineering). Unlike other popular UX conferences such as IxDA and UX Week, UXIM has a strong, practical focus on tools and techniques for creating great mobile experiences, which I found quite useful and relevant to my daily practice.

My colleague Ian Barlow has put together well-summarized notes covering the majority of the talks. In this post, I will focus on a few that he did not cover.

Cyd Harrell, Conducting Usability Research for Mobile Apps

In this full-day workshop, Cyd Harrell introduced many of the latest mobile research techniques for interviewing, gathering data, and involving the entire team.

Mobile Research Tools

And more importantly: remember to be graceful when all your technology breaks.

Designing a mobile-specific research plan

  • Come up with a script that is flexible enough for customization, for example “Please look for a gift you would like to buy for one of your family members”
  • New recruiting channel: Use Twitter and edit your profile to mention “UX Researcher” and “Current study: ______”
  • Use SMS to send out mobile surveys. Keep them short (5 questions). Ask a pithy open-ended question.
  • Filling in the gaps: Test on mob4hire.com, mobtest, usertesting.com

Collecting user data with mobile devices

To gather user data, there are usually two approaches: use a lab device, or use the user’s own device. Using the user’s own mobile device provides better context and makes the user feel comfortable. In comparison, using a lab device loses the personal context, but it reduces installation time and gives the researcher more control.

Tips for working with users’ own mobile devices:

  • Have a backup iPhone and Android device on hand
  • Make a power strip part of your testing station
  • Have chargers for all the phones you are expecting
  • Instruct participants to install necessary apps in advance, but leave time in case they don’t
  • Adjust the screen brightness before you start each session

Tips for offering lab devices:

  • Have a backup iPhone and Android device on hand
  • Charge devices between every session
  • Remove passwords, lock screens, etc
  • Dry-run every aspect of the test

For the physical lab setup, Cyd mentioned the “Hug your laptop” approach for remote mobile testing. It was adopted first by the Mailchimp UX team and then by the Mozilla UX team. I’ve personally participated in a testing session run by our former researcher Diane, and it worked extremely smoothly.

Cyd's slide 1

Cyd also mentioned using sleds to help observe and gather testing data. One example product is Mr. Tappy.

She also mentioned a great community movement, Open Device Lab, which was created to establish shared community pools of digital devices for testing products, as well as some in-house practices for building a device lab, for example Etsy’s case study.

Conducting user interviews on-the-go

Through an interactive activity, Cyd demonstrated that reaching your participants via mobile can help you collect in-context, real-time results. She mentioned a great mobile case study, Trackyourhappiness.org, a study conducted entirely on the go via SMS and mobile survey forms. Blurb, dscout, and Usabilia are also effective tools for gaining a contextual understanding without shadowing users in person. This contextual technique can be particularly helpful in settings like public transit, food habits, shopping, and exercise.

As a designer with experience in mobile usability studies and research, I found Cyd’s workshop extremely informative and practical. I’m looking forward to exploring these tools and applying the techniques in my next research project.

Jason Grigsby, Adapting to Different Types of Input

Nowadays people have touch screens, cameras, voice control, and sensors on their digital devices. How do we design for the explosion of these dynamic inputs?  Jason Grigsby shared some interesting facts about input and his forward-thinking strategies.

Jason Grigsby's slide 1

The Web never had a fixed canvas. Knowing screen sizes doesn’t matter as much, since the lines between phones, tablets, and laptops are blurred. Resolution doesn’t define the optimal experience.

Jason Grigsby's slide 2

How do we handle this challenge? Jason showed us several futuristic input technologies and products that could potentially change the game.

Circling back, Jason dived deep into explaining what this input challenge means for the Web:

  • For TV remote controls, input patterns are difficult to detect.
  • There is still a gap between touch input and desktop input (mouse and keyboard).
  • For many people in emerging countries, mobile is their first smart device, not the desktop.
  • People use laptops with mouse, keyboard, and touch all together. Switching modes won’t solve the problem.

Jason Grigsby's slide 3

What if the product has lots of legacy and the users won’t let go? Some examples Jason listed demonstrated good product strategies for giving users a choice.

The key benefit of this approach is that you are designing for a user need, not for a specific form factor or input.
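As a rough web-flavored illustration of that principle (our own sketch, not an example from the talk): Pointer Events report the device type on each interaction, so an interface can adapt to the input actually in use, interaction by interaction, instead of switching into a global “touch mode” or “mouse mode.” The `.toolbar` selector and `coarse-pointer` class are made up for the example.

```typescript
const toolbar = document.querySelector<HTMLElement>(".toolbar");

if (toolbar) {
  toolbar.addEventListener("pointerdown", (event: PointerEvent) => {
    // pointerType is "mouse", "pen", or "touch" per the Pointer Events spec.
    const coarse = event.pointerType === "touch";
    // Enlarge hit targets only while the user is actually touching; the
    // same person may use mouse, keyboard, and touch in a single session.
    toolbar.classList.toggle("coarse-pointer", coarse);
  });
}
```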

Having worked on UI design for hybrid and convertible devices, I found Jason’s viewpoints about dynamic inputs quite valuable and relevant. The web is a continuous canvas, and adaptive interfaces shouldn’t belong only to websites. It’s time to break the boundaries between desktop, mobile, and web, and rethink the experience as a whole.

As I mentioned, the two talks above are the ones my colleague didn’t cover. I also enjoyed brilliant talks by Ben Callahan and Brad Frost, and learned a lot from a hands-on jQuery workshop by Nate Schutta.

Overall, UXIM 2014 was a delightful experience. It was eye-opening to share thoughts about mobile with people who design in various industries, such as healthcare, energy, and retail. Thanks to Jared Spool and the rest of the team for putting together a wonderful conference.