Recently, I wrote about how I use multiple (10!) clones of the mozilla-inbound repository, with one Mercurial queue per clone, to work on multiple changes to the Mozilla codebase concurrently.
At times, I’ve felt almost guilty about using such a heavyweight branching mechanism, as opposed to a lightweight (i.e. intra-clone) branching mechanism such as git branches, or Mercurial bookmarks, or multiple Mercurial queues in a single clone (managed via
hg qqueue). It seemed clumsy, like I was missing out on a compelling feature of modern version control systems.
But I have now come to understand that each approach is appropriate in particular circumstances. Specifically, lightweight branches are not appropriate when code modifications incur a non-trivial build-time cost.
Consider first a developer whose modifications incur essentially no build-time cost — someone who works mostly on uncompiled code, for example. For this developer, lightweight branches are entirely appropriate, because they can switch between branches with hardly a care in the world.
In contrast, consider a Mozilla developer (such as me!) who works mostly on C++ code within Gecko. After modifying code, this developer incurs a decidedly non-zero build cost — on my machine, just linking libxul takes around ten seconds. So any change to Gecko’s C++ code will require at least this much time, and it’s often substantially more, especially for anyone using a slow machine and/or OS.
For this developer, lightweight branches are not appropriate, because every switch between branches forces a rebuild. ccache mitigates this problem, but it doesn’t solve it. Worse, the developer may well have switched away from one branch precisely because they are waiting for a long-running build to complete, and lightweight branches certainly don’t allow a build to continue on one branch while you work on another.
These two distinct cases may be obvious to some people, but they weren’t to me. If nothing else, as someone who mostly works on C++ Mozilla code, I now can feel content with the heavyweight branching approach I use.
6 replies on “Lightweight branches aren’t always appropriate”
I think if you get fancy with scripts and whatnot then you can use different build directories for different branches, and then you’d avoid the overhead. I’m not sure how you maintain the branch to objdir mapping, though. I think johns does something like this.
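A minimal sketch of what such a branch-to-objdir mapping could look like — the paths, helper names, and mozconfig scheme here are my guesses, not johns’ actual scripts:

```shell
#!/bin/sh
# Derive a deterministic per-branch objdir path, so each branch keeps
# its own build output and switching branches clobbers nothing.
objdir_for() {
    echo "$HOME/objdirs/obj-$1"
}

# Build the current branch in its own objdir by pointing MOZCONFIG
# at a per-branch config (assumed scheme, not a real Mozilla recipe).
build_branch() {
    branch=$(hg identify --branch 2>/dev/null || echo default)
    objdir=$(objdir_for "$branch")
    mkdir -p "$objdir"
    printf 'mk_add_options MOZ_OBJDIR=%s\n' "$objdir" > "$objdir/mozconfig"
    MOZCONFIG="$objdir/mozconfig" ./mach build "$@"
}
```

With something like this, running |build_branch| from any checkout reuses whatever state that branch’s objdir was left in last time.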
I used to use a bunch of source directories, inspired by your post from a few years ago, but now that I have my new, very fast machine I just don’t worry about it, given that a rebuild takes no more than five minutes or so when there’s a lot of ccaching.
Ok, let’s assume you have different build dirs for different branches. So if you switch from one branch to another, and then back to the first one, your builddir and srcdir are in a consistent state. But make (or mach) doesn’t know this; it just looks at timestamps and sees that some source files have been modified and so it assumes they need to be rebuilt, even though in this particular case they don’t. (If you had a smarter build system that looked at contents, you’d be ok. As mentioned, ccache does this, but it’s not ideal because hits still take time.)
And this solution won’t help with doing coding while waiting for a build on another branch, because with lightweight branches you’ve always only got one srcdir.
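A tiny self-contained demonstration of that timestamp behaviour, using a toy Makefile rather than anything Mozilla-specific: make re-runs a rule whenever the prerequisite’s mtime is newer than the target’s, even when the contents are byte-identical.

```shell
#!/bin/sh
# Build once, then touch the input without changing its contents;
# make rebuilds anyway, because it only compares timestamps.
demo_timestamp_rebuild() {
    dir=$(mktemp -d)
    printf 'out: in\n\tcp in out\n' > "$dir/Makefile"
    echo data > "$dir/in"
    make -C "$dir" >/dev/null 2>&1   # first build
    sleep 1                          # ensure a strictly newer mtime
    touch "$dir/in"                  # same contents, new timestamp
    make -C "$dir" 2>/dev/null       # runs "cp in out" again
    rm -rf "$dir"
}
```

A content-aware tool (ccache, or a checksum-based build system) would skip the second copy; plain make cannot.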
The main issue here is having your build directory also be your working directory, which means that actions like changing branches clobber everything from the build’s perspective.
How I handle it is with a script that allows me to set my current build config along with a build directory name, then a build wrapper that creates a separate git working directory and objdir for just that config. E.g. |moz ff-dbg central && mb| would produce a build directory |moz/moz-build-central| with objdir |moz/ff-dbg-central|.
Whenever I run a build it creates a temporary commit of any uncommitted changes (with git stash) and checks that out in the build directory. This ensures only files that changed between two builds are touched, regardless of what I do in my main repository in between. It also lets me continue working in my main repo once the build has begun.
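Here is a rough sketch of that snapshot-and-checkout step. It uses git’s built-in linked-worktree support rather than the scripts described above, and the function name and paths are illustrative assumptions:

```shell
#!/bin/sh
# Snapshot the source repo's working tree as a commit (without
# disturbing it) and check that snapshot out in a linked build
# worktree, so the build never sees in-progress edits.
sync_build_dir() {
    src=$1 build=$2
    # Create the linked build worktree on first use.
    [ -d "$build" ] || git -C "$src" worktree add --detach "$build" >/dev/null

    # `git stash create` records tracked modifications as a dangling
    # commit and leaves the working tree alone; fall back to HEAD
    # when the tree is clean.
    snapshot=$(git -C "$src" stash create)
    [ -n "$snapshot" ] || snapshot=$(git -C "$src" rev-parse HEAD)

    # A plain checkout in the worktree rewrites only the files that
    # differ from the previous snapshot, keeping rebuilds minimal.
    git -C "$build" checkout --quiet --detach "$snapshot"
}

# e.g.: sync_build_dir ~/moz/central ~/moz/moz-build-central
#       (cd ~/moz/moz-build-central && ./mach build) &
```

Because the snapshot is an ordinary commit, you can keep editing in the source repo the moment the build starts.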
My scripts probably aren’t super portable, but if you want to take a look, the moz() command that chooses a mozconfig and sets a few env vars that can be used in PS1 and subsequent scripts is at:
And my build script:
Git has a few features (stashes, multiple workdirs) that make this easy, but it could be implemented in Mercurial just the same with some use of |hg diff| and |patch|, or even just |rsync --exclude=.hg| to keep the build directory synced.
Unfortunately, our build system still isn’t great, and so even as a front-end dev, there is no quick way to update tiny bits of code that only need preprocessing, or even nothing at all. I recently had to manually nuke bits of my objdir and do a complete ./mach build just to get a change in jar.mn to show up (hello, Windows).
And I do actually use separate objdirs, by simply exporting the right MOZCONFIG when I ./mach build. OTOH, I mostly just use fx-team for front-end things, and then use m-i when I foray into (mostly) compiled code.
It depends on the file. I often modify aboutMemory.js, and it doesn’t require any build-time actions after modification, because the builddir contains a symlink back to the copy in the srcdir. And for some other files you just have to copy from the srcdir to the builddir after changes are made.
Lightweight vs. heavyweight is kind of orthogonal to your issue. What you want is not multiple local repositories/branches. What you want is multiple working copies. With git, it’s easy to set up. IIRC, Mercurial doesn’t support that very well, if at all.
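For anyone wanting to try the multiple-working-copies route: git’s |git worktree| command does exactly this (it may postdate this comment; earlier setups used the contrib git-new-workdir script). A sketch with illustrative paths and branch names:

```shell
#!/bin/sh
# Add a second working copy of an existing repository on a new
# branch. Both checkouts share one .git object store, but each has
# its own files on disk (and so can have its own objdir).
add_workdir() {
    repo=$1 dir=$2 branch=$3
    git -C "$repo" worktree add "$dir" -b "$branch"
}

# e.g.: add_workdir ~/src/gecko ~/src/gecko-bug1234 bug1234
#       git -C ~/src/gecko worktree list   # show all checkouts
```

Each worktree can sit on a different branch, so a long build in one checkout never blocks editing in another.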