My name is Taras and I like to wrestle useful information out of tools that do not think to offer it. I firmly believe that writing and maintaining code is harder than it should be, and that the current organization of compilers is partly responsible for that.
Related Prior Work
Some groups at Mozilla have realized that inspecting and refactoring code are tasks that will consume as many people as can be thrown at them, so there has to be a better way. That better way is through tools: get computers to do things that are difficult for humans. We do that a lot for other kinds of tasks, but not so much for working with code. For example, when was the last time you searched the web using telnet? Yet our tools for presenting and analyzing code are about as sophisticated as telnet. For some reason it is easier to find some random piece of information on the web than it is to find what code implements a function that is being called from my code.
As the original author, I think Dehydra and Treehydra, built on top of the GCC plugin API, give us a pretty reasonable way to extract useful information out of our C++ code. I feel lost and confused without Dave’s DXR, the JS team routinely breaks (and fixes) invariants in the JS engine, and we’ve been able to wipe out certain classes of bugs (i.e. the code patterns that cause them) Mozilla-wide. I’m looking forward to reducing the footprint of Mozilla by deleting more dead code.
I think it’s clear that the common trend in all these tasks is turning an opaque text blob into something that can be easily navigated programmatically. Wouldn’t it suck not to be able to walk the DOM of an HTML page? Why is it ok to accept that handicap for the essence of all programs?
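The DOM comparison is easy to make concrete. Once code is a tree rather than a text blob, a question like “where is the function I’m calling defined?” becomes a short traversal. Here is a minimal sketch of that idea using Python’s standard ast module — my own illustration of the principle, not part of Dehydra or any Mozilla tool:

```python
import ast

source = """
def caller():
    return helper()

def helper():
    return 42
"""

# Parse the "opaque text blob" into a navigable tree.
tree = ast.parse(source)

# Walk the tree: collect every function definition and every
# name that is called, the way one would walk an HTML DOM.
defs = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
calls = [n.func.id for n in ast.walk(tree)
         if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)]

print(defs)   # -> ['caller', 'helper']
print(calls)  # -> ['helper']
```

A plain-text search for `helper` cannot tell a definition from a call site; the tree answers that in two list comprehensions. The same structural access is what Dehydra-style tools provide for C++, where no such parser ships with the language.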
How Should This Be Solved?