The take-away message: you lose more when slow than you gain when fast. Your performance is determined primarily by your slowest operations. This is true for two reasons. First, software can easily exhibit enormous differences in performance: 10x, 100x, 1000x and more aren’t uncommon. Second, in complex programs like a web browser, overall performance (i.e. what a user feels when browsing day-to-day) is determined by a huge range of different operations, some of which will be relatively fast and some of which will be relatively slow.
Once you realize this, you start to look for the really slow cases. You know, the ones where the browser slows to a crawl and the user starts cursing and clicking wildly and holy crap if this happens again I’m switching to another browser. Those are the performance effects that most users care about, not whether their browser is 2x slower on some benchmark. When they say “it’s really fast”, they probably actually mean “it’s never really slow”.
That’s why memory leaks are so bad — because they lead to paging, which utterly destroys performance, probably more than anything else.
It also makes me think that the single most important metric when considering browser performance is page fault counts. Hmm, I think it’s time to look again at Julian Seward’s VM profiler and the Linux perf tools.
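As a minimal illustration of inspecting that metric (my own sketch, not part of either profiler mentioned above): on Linux, Python’s standard `resource` module exposes a process’s own page fault counters, distinguishing minor faults (resolved in memory) from major faults (which required disk I/O, i.e. the paging that hurts).

```python
import resource

# Query this process's own resource usage counters (Linux/Unix only).
usage = resource.getrusage(resource.RUSAGE_SELF)

# Minor faults: page was mapped without any disk I/O.
print("minor page faults:", usage.ru_minflt)

# Major faults: page had to be fetched from disk -- these are the
# expensive ones that make a thrashing machine feel unusable.
print("major page faults:", usage.ru_majflt)
```

For a whole external process, `perf stat -e page-faults <command>` reports a similar count; a steadily climbing major-fault number is a strong hint that paging is what the user is feeling.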