Browsing Sessions III: Do Users Overestimate How Long They Browse?

Hamilton Ulmer


In our last post, we found that the number of installed extensions was a good discriminant of heavier users. In this short follow-up, we delve into the survey data associated with the Beta Interface study. Here is a snapshot of some of the research we’ve been conducting.

Users overshoot their estimated browsing time

The graph above shows that users tend to overestimate how long they use Firefox. Those who typically use the browser less give fairly accurate assessments of how long they browse, but for users who report a longer browsing time per day, actual browser usage falls short of their own estimates.

First, a note about the methodology behind this graphic. We estimate average daily browsing time by aggregating the session lengths of Test Pilot users over the course of the study. Previously, we defined a browser session as a continuous period of user activity in the browser in which successive events are separated by no more than 30 minutes. We restrict the analysis to users who state that they use only Firefox, to avoid confounding from a different primary browser.
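The 30-minute gap rule above can be sketched in a few lines of code. This is a hypothetical illustration, not the actual Test Pilot pipeline: it assumes we have a sorted list of event timestamps for one user and splits them into sessions wherever the gap between successive events exceeds 30 minutes.

```python
from datetime import datetime, timedelta

# Illustrative only: successive events more than GAP apart start a new session.
GAP = timedelta(minutes=30)

def session_lengths(events):
    """Given a sorted list of event datetimes for one user,
    return the length (as a timedelta) of each browsing session."""
    sessions = []
    if not events:
        return sessions
    start = prev = events[0]
    for t in events[1:]:
        if t - prev > GAP:              # gap too long: close the current session
            sessions.append(prev - start)
            start = t
        prev = t
    sessions.append(prev - start)       # close the final session
    return sessions

# Example: two bursts of activity separated by a 100-minute gap.
events = [datetime(2011, 1, 1, 9, 0),
          datetime(2011, 1, 1, 9, 20),
          datetime(2011, 1, 1, 11, 0),  # > 30 min after the previous event
          datetime(2011, 1, 1, 11, 5)]
print(session_lengths(events))  # two sessions: 20 minutes and 5 minutes
```

Summing the resulting session lengths per user per day gives the estimated daily browsing time that we compare against the survey responses.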

We thought of a few possible explanations for why, among heavier users, measured time falls below stated time. These users might, for instance, be online and using their computers quite a bit during the day, but have integrated their online workflow with their offline one. Software engineers are a good example: we might expect a programmer to work at a computer all day, leave the browser open, and use it every once in a while, creating the perception of constant browser usage. This certainly rings true from the experience of the Metrics team: we’re on our computers almost all day with Firefox open, even while working on other things. This is, however, only speculation at this point, since we don’t have data about when users are at their computers.

There are still some obvious methodological issues with this approach: a user might, for instance, use Firefox on a work computer (with Test Pilot installed) and a different browser at home, which could account for the difference. As such, we hope to include a survey question asking “How much time a day do you spend on this computer?” in the next version of the study, at which point we can update this research.

2 responses


  1. Gordon P. Hemsley wrote:

    You do make a valid point about people (especially developers) equating “Firefox is open” with “using Firefox”, even though they’re not actually using it all the time.

    But does the data account for a bias in the other direction? You note that it doesn’t account for using multiple computers in different locations that may not all be set up the same way (i.e. with Test Pilot installed). But does it account for when the browser crashes or hangs and needs to be restarted? What about simply using multiple profiles on the same computer? (Or does the latter not really matter for the metrics anyway?)

  2. Nathaniel Tucker wrote:

    More likely explanation: Users are not on one computer all day. They are reporting usage across all machines, which would not be reflected by this data.
