Snappy #38: Responsiveness Fixes Galore

End of summer is a tough time to make progress because a lot of people are on vacation. Surprisingly, Firefox still got some good fixes in since the last update.

Less Slow Startups

Bug 726125: should get rid of a lot of super-slow startups. Due to an abstraction accident we ended up validating jars more eagerly than expected: Firefox would go on the net (on the main thread) to check the certificate every time a signed jar was opened. There are over 500 signed extensions on AMO with over 14 million active users. See the following for background on the (now dead) feature that caused our jar code to go nuts: signed scripts and note on removal of signed script support. Thanks to Nicholas Chaim and Vladan Djeric for fixing this.

Less Proxy Lag (WIP)

Bug 769764. We have received a lot of strange complaints about Firefox network performance that we could never reproduce. Turned out this was because none of us used proxies. Patrick McManus discovered a lot of synchronous proxy and DNS code in our network stack.

The fix for this should also improve performance for people without proxies, since the proxy-autodetection code was also doing main-thread IO. As a result of us replacing sync APIs with async ones, all of the existing proxy-related addons will have to be updated. Patrick is reaching out to addon authors to make sure their addons are updated in time for the next release.
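The sync-vs-async difference here can be shown with a minimal sketch. This is a generic illustration in Python, not Gecko's actual XPCOM interfaces; the function names are made up for the example:

```python
import socket
import threading

def resolve_proxy_sync(host):
    # Blocking lookup: called on the main thread, this stalls the UI
    # for the full duration of the DNS round trip.
    return socket.gethostbyname(host)

def resolve_proxy_async(host, callback):
    # Run the same blocking lookup on a worker thread and deliver the
    # result through a callback, so the calling thread stays responsive.
    def worker():
        try:
            callback(socket.gethostbyname(host), None)
        except OSError as err:
            callback(None, err)
    threading.Thread(target=worker, daemon=True).start()
```

The API shape change (return value replaced by a callback) is exactly why existing callers, including addons, have to be updated rather than just recompiled.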

Less UI Repaint Lag

Bug 786421: Nightlies got unbearably slow for me recently. It turned out we ended up continuously resizing + applying theme + redrawing invisible tooltips on every paint. Thanks to Timothy Nikkel for fixing this. This bug never affected anyone outside of the Nightly/Aurora testers, but it serves as yet another example of how the Gecko Profiler makes it easier than ever to diagnose weird performance problems. The single biggest contribution anyone can make at the moment is to provide instructions on how to reproduce lag, with accompanying profiler traces.

Less Gradient Lag

Bug 761393: Paul Adenot implemented a gradient cache. This was landed as a Telemetry experiment so we can determine what the optimal cache retention strategy is. We’ll be watching the relationship between GRADIENT_DURATION and GRADIENT_RETENTION_TIME in the coming weeks.
Currently, rendering gradients causes stalls in the GPU pipeline. In previous experiments we found that most of the tab-switch rendering time in hardware-accelerated Firefox is spent rendering gradients :(. Gradients are hard to notice for casual users, but they are heavily used in our tab strip and on Google web properties.
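To illustrate the idea (this is not Paul's actual implementation; the class and the time-based retention policy here are hypothetical), a gradient cache keyed by a gradient's stops might look like:

```python
import time

class GradientCache:
    """Sketch of a cache for rendered gradient surfaces, keyed by the
    gradient's stops, with a time-based retention policy: entries that
    have not been used within `retention_seconds` are evicted."""

    def __init__(self, retention_seconds, clock=time.monotonic):
        self.retention = retention_seconds
        self.clock = clock      # injectable for testing
        self.entries = {}       # stops -> (surface, last_used)

    def get(self, stops, render):
        now = self.clock()
        # Drop entries that have gone unused past the retention window.
        self.entries = {k: v for k, v in self.entries.items()
                        if now - v[1] <= self.retention}
        if stops in self.entries:
            surface, _ = self.entries[stops]
        else:
            surface = render(stops)   # the expensive rendering work
        self.entries[stops] = (surface, now)
        return surface
```

The Telemetry experiment is essentially measuring which `retention_seconds` value gives the best hit rate without holding surfaces too long; that is the GRADIENT_DURATION vs GRADIENT_RETENTION_TIME relationship mentioned above.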


I may not have a chance to post the next snappy update as I’ll be hopping on a plane to Warsaw right after our meeting. If you are attending MozCamp, come to our ‘All About Performance’ session. Our goal for the talk is to significantly expand the pool of people who can diagnose Firefox (and web) performance problems.


  1. From the blog it is unclear to me which version of Firefox will have these fixes. Can you elaborate on that?

  2. Hello, Dan Neely suggested I cross post a portion of a comment I wrote at the Memshrink blog. Here is the relevant section of the original comment.

    1. Observation: playing flash videos (e.g. Youtube, both directly and embedded in other pages) leads to higher memory usage. Several updates of the flash player didn’t fix this. FF becomes very slow when playing videos, videos skip, even the stream buffering of videos is slow, etc.

    2. Observation: video playback is not as bad if I let the video download entirely first. Disk activity quiets down somewhat, and then performance is less bad. These led to:

    3. Observation: video playback performance is worse when FF’s web data cache is full. It’s configured to the default, 1GB.

    So this led me to suspect that FF is trying to clear up enough space in the data cache and is running into trouble.

    From past experience with the data cache I know deleting it completely could require over 1 hour of disk i/o. In part this was because files would be deleted in random order (from the point of view of the file system). Eventually this was fixed so files were deleted in order plus deletion became some sort of offline process where the whole cache is first moved and then deleted in order, and this sped up managing the cache. However, it is still the case that the file cache tends to hold a huge number of mostly small files.
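    The "move the whole cache, then delete offline" trick described above can be sketched as follows. This is a generic illustration of the technique, not Firefox's actual cache code:

```python
import os
import shutil
import tempfile
import threading

def clear_cache_async(cache_dir):
    """Rename the cache directory out of the way (cheap on the same
    filesystem), so a fresh empty cache is available immediately, then
    delete the old tree on a background thread."""
    doomed = tempfile.mkdtemp(dir=os.path.dirname(cache_dir))
    os.rename(cache_dir, os.path.join(doomed, "old-cache"))
    os.makedirs(cache_dir)                      # fresh cache right away
    t = threading.Thread(target=shutil.rmtree, args=(doomed,))
    t.start()
    return t                                    # caller may join() if needed
```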

    [While looking at the cached files, I had observed many tiny files (~500 bytes) that looked mostly identical on eye inspection, and I suspected they might be wholly duplicate files. The last time the cache filled up to capacity, I ran fdupes before deleting it; fdupes produced output showing hundreds of duplicated files.]

    So, question 1a: would it be possible to e.g. MD5 all files going into the file cache, and only add new files when the MD5 hash is different (or, if the MD5 hash is equal, break the tie with a second hash or a comparison of the file contents)? The idea behind this is “no duplicate files in the cache” -> “better cache utilization” -> “less disk i/o” -> “more responsive FF”.

    Question 1b: when needing to make up enough space in the file cache, could the operation waste time by having to free (say) 30 MB while the deletion proceeds ~500 bytes at a time? If so, what could be done to address the problem? I ask because, in general terms, heavy disk i/o might be confused with swap file thrashing, with the implication that FF is using too much memory to begin with.
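    The content-hash deduplication proposed in question 1a can be sketched as follows (an illustration of the idea, not an actual Firefox patch; a real implementation would also need the tie-breaking comparison mentioned above):

```python
import hashlib

def add_to_cache(cache, data):
    """Index cache entries by a content hash so that identical payloads
    are stored only once.  `cache` maps hex digest -> bytes; callers
    reference entries by the returned digest."""
    digest = hashlib.md5(data).hexdigest()
    if digest not in cache:
        cache[digest] = data    # new content: store it
    return digest               # duplicate content: reuse existing entry
```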

    I’d be happy to spend some time to help figuring this stuff out.