A really good ACM article about static analysis, written from Coverity’s perspective, has been making the rounds at Mozilla. What struck me most was the following paragraph:
At the most basic level, errors found with little analysis are often better than errors found with deeper tricks. A good error is probable, a true error, easy to diagnose; best is difficult to misdiagnose. As the number of analysis steps increases, so, too, does the chance of analysis mistake, user confusion, or the perceived improbability of event sequence. No analysis equals no mistake.
My personal view has been that “dumb” analyses are the most effective ones, measured by mistakes spotted per hour spent writing and landing the analysis. It is interesting to see that sophisticated analyses are difficult to deploy even for Coverity.
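To make the “dumb analysis” idea concrete, here is a minimal sketch (my own illustration, not Coverity’s or Mozilla’s tooling): a single-step textual check that flags every call to `strcpy` in C source. There is no data-flow or path reasoning, so every report is one verifiable textual fact, which is exactly what makes shallow checks hard to misdiagnose.

```python
import re

# One regex, one analysis step: flag any call to strcpy.
# The choice of strcpy as the "risky" function is illustrative.
RISKY_CALL = re.compile(r'\bstrcpy\s*\(')

def flag_risky_calls(source: str):
    """Return (line_number, line) pairs where a risky call appears."""
    return [(n, line.strip())
            for n, line in enumerate(source.splitlines(), start=1)
            if RISKY_CALL.search(line)]

example = """\
char buf[8];
strcpy(buf, user_input);
strncpy(buf, user_input, sizeof buf - 1);
"""
# Only line 2 is reported; strncpy on line 3 does not match.
print(flag_risky_calls(example))
```

A deeper checker might track buffer sizes across calls to suppress false positives, but every extra analysis step is another place for the tool to be wrong and another thing the user must understand before believing the report.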
In other news, LCA 2010 was my favourite conference so far. I met a number of awesome developers there. Mozilla’s static analysis work finally got mentioned in LWN!