Working on the JS shell is great. On my 2.5-year-old Linux box it takes maybe 1 minute to build from scratch, rebuilds are almost instantaneous, the regression tests take a few minutes, shell start-up is instantaneous, you rarely have to do try server runs, tools like GDB and Valgrind are easy to run, and landing patches on the TraceMonkey repo is low-stress because breakage doesn’t affect too many people.
In comparison, working on the browser sucks. Builds from scratch take 25 minutes, zero-change rebuilds take 1.5 minutes, single-change rebuilds take 3 or 4 minutes, the linking stage grinds my machine to a halt, cold starts take 20 seconds or more (warm starts are much better), the test suites are gargantuan, every change requires a try server run, tools like GDB and Valgrind require jumping through hoops (--disable-jemalloc, anyone?), and landing patches on mozilla-central is stressful.
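For reference, the jemalloc hoop lives in one’s mozconfig. A minimal sketch of a Valgrind-friendly configuration; only --disable-jemalloc comes from the complaint above, the other options are typical debug-build assumptions rather than a recipe:

```shell
# Sketch of a mozconfig for a Valgrind-friendly browser build (assumptions
# beyond --disable-jemalloc: a debug, unoptimized build in its own objdir).
ac_add_options --disable-jemalloc   # let Valgrind interpose its own allocator
ac_add_options --enable-debug
ac_add_options --disable-optimize
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/objdir-debug
```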
Thanks to my recent about:memory work, I’ve had to experience this pain first-hand. It’s awful. Debugging experiments that would take 20 seconds in the shell take 5 minutes in the browser. I avoid ‘hg up’ as much as possible due to the slow rebuilds it usually entails. How do all you non-JS people deal with it? Maybe you just get used to it… but I figure there have to be some tips and tricks I’m not aware of.
(Nb: Why are rebuilds so slow? Because configure is invoked every time? Imprecise dependencies in makefiles? Too much use of recursive make? Bug 629668? Why do Fennec rebuilds seem to be slower than Firefox rebuilds?)
Kyle Huey told me how you can get away with rebuilding only parts of the browser. E.g. if I only modify code under xpcom/, I can just do
make -C <build>/xpcom && make -C <build>/toolkit/library and this reduces the rebuild time a bit. The downside is that when I screw it up, e.g. by forgetting to rebuild a directory that I changed, it creates Frankenbuilds, and realizing what I’ve done can end up taking a lot more time than I saved.
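One way to make Frankenbuilds less likely would be to script the partial-rebuild step instead of typing it by hand. A minimal sketch, assuming a typical top-level source layout and an $OBJDIR object directory (the file names in `modified` are made up for illustration), that turns a list of touched files into the make invocations, always finishing with the toolkit/library relink:

```shell
#!/bin/sh
# Hypothetical helper: derive the unique top-level directories from a list
# of modified files, then print one "make -C" per directory, followed by
# the toolkit/library step that relinks libxul so the changes actually land.
OBJDIR='$OBJDIR'   # placeholder for the real object directory

modified="xpcom/ds/nsFoo.cpp xpcom/io/nsBar.cpp content/base/src/nsBaz.cpp"

# Reduce the file list to its unique top-level directories.
dirs=$(for f in $modified; do printf '%s\n' "${f%%/*}"; done | sort -u)

for d in $dirs; do
  echo "make -C $OBJDIR/$d"
done
echo "make -C $OBJDIR/toolkit/library"
```

In practice the `modified` list could come from `hg status -mn`, which would remove the forgot-a-directory failure mode entirely.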
What else can I do? Getting a faster machine is the obvious option, I guess. More cores would help, though linking would still be a bottleneck. Do SSDs make a big difference?
Also, there’s talk of using more project branches and introducing mozilla-staging. That would avoid the stress of landing on mozilla-central, but that’s really the smallest part of what I’m complaining about.
Any and all suggestions are welcome! Please, I’m begging you.