31 May 11

More stupid mercurial tricks

I think I’m missing something. How do people get those changeset URLs to paste into bugs? Ok, if I’m landing on mozilla-central or a project branch, I just get it from tbpl since I’ll be staring at it anyway. But what about some other repo? Like, say, ssh://hg.mozilla.org/users/mstange_themasta.com/tinderboxpushlog?

As usual, I coded my way around the problem before asking the question, which is stupid and backwards. But just in case there really isn’t a good way, here’s my silly hackaround. Put this in the [alias] section of your ~/.hgrc; then, after landing a change, do ‘hg urls -l 3’ or similar. (That’ll give you the latest 3 changesets):

  urls = !$HG log --template='{node|short} {desc|firstline}\n' ${HG_ARGS/urls /} | perl -lpe 'BEGIN { ($url = shift) =~ s/^\w+/http/ }; s!^(?=\w+)!$url/rev/!' `hg path default`

Picking that apart: it removes the misfeature that $HG_ARGS contains the command you’re running, then passes the remaining command line to hg log with a template set to print just the changeset shorthash and the first line of the commit message. It sends that, along with the URL of the default upstream repo, through a perl command that rewrites each line of the hg log output to “<repo URL>/rev/<shorthash> <first line of commit message>”. Oh, and it changes the scheme of the repo URL to http, because my test case is actually an SSH URL and there just happens to be an HTTP server at the same address.
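
For example, against that tinderboxpushlog repo the output looks something like this (the hashes and commit messages here are invented for illustration):

  $ hg urls -l 2
  http://hg.mozilla.org/users/mstange_themasta.com/tinderboxpushlog/rev/1f2e3d4c5b6a add a silly example entry
  http://hg.mozilla.org/users/mstange_themasta.com/tinderboxpushlog/rev/9a8b7c6d5e4f fix the previous silly example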

A mess, but it works for me.

And yes, I should switch to a blog that isn’t hostile to code. Sorry about that line up there.


26 May 11

Record your freshness

I often like to split patches up into independent pieces, for ease of reviewing by both reviewers and myself. You can split off preparatory refactorings, low-level mechanism from high-level users, features from tests, etc., making it much easier to evaluate the sanity of each piece.

But it’s something of a pain to do. If I’ve been hacking along and accumulated a monster patch, with stock hg and mq I’d do:

  hg qref -X '*'                  # get all the changes in the working directory; only
                                  # needed if you've been qref'ing along the way
  hg qref -I '...pattern...'      # put in any touched files
  hg qnew temp                    # stash away the rest so you can edit the patch
  hg qpop
  hg qpop                         # go back to unpatched version
  emacs $(hg root --mq)/patchname # hack out the pieces you don't want,
                                  # put them in /tmp/p or somewhere...
  hg qpush                        # reapply just the parts you want
  patch -p1 < /tmp/p
  ...                             # you get the point. There'll be a qfold somewhere in here...

and on and on. It’s a major pain. I even started working on a web-based patch munging tool because I was doing it so often.

Then I discovered qcrecord, part of the crecord extension. It is teh awesome with a capital T (and A, but this is a family blog). It gives you a mostly-spiffy-but-slightly-clunky curses (textual) interface to select which files to include, and within those files which patch chunks to include, and within those chunks which individual lines to include. That last part, especially, is way cool — it lets you do things that you’d have to be crazy to attempt working with the raw patches, and are a major nuisance with the raw files.

Assuming you are again starting with a huge patch that you’ve been qreffing, the workflow goes something like:

  hg qref -X '*'
  hg qcrecord my-patch-part1
  hg qcrecord my-patch-part2
  hg qcrecord my-patch-part3
  hg qpop -a
  hg qrm original-patchname
  hg qpush -a

Way, way nicer. No more dangerous direct edits of patch files. But what’s that messy business about nuking the original patch? Hold that thought.

Now that you have a nicely split-up patch series, you’ll be wanting to edit various parts of it. As usual with mq, you qpop or qgoto to the patch you want to hack on, then edit it, and finally qref (qrefresh). But many times you’ll end up putting in some bits and pieces that really belong in the other patches. So if you were working on my-patch-part2 and made some changes that really belong in my-patch-part3, you do something like:

  hg qcrecord piece-meant-for-part3             # only select the part intended for part3
  hg qnew remaining-updates-for-part2           # make a patch with the rest of the updates, to go into part2
  hg qgoto my-patch-part2
  hg qpush --move remaining-updates-for-part2   # now we have part2 and its updates adjacent
  hg qpop
  hg qfold remaining-updates-for-part2          # fold them together, producing a final part2
  hg qpush
  hg qfold my-patch-part3                       # fold in part3 with its updates from the beginning
  hg qmv my-patch-part3                         # and rename, mangling the comment

or at least, that’s what I generally do. If I were smarter, I would use qcrecord to pick out the remaining updates for part2, making it just:

  hg qcrecord more-part2    # select everything intended for part2
  hg qnew update-part3      # make a patch with the rest, intended for part3
  hg qfold my-patch-part3   # fold to make a final part3
  hg qmv my-patch-part3     # ...with the wrong name, so fix and mess up the comment
  hg qgoto my-patch-part2
  hg qfold more-part2       # and make a final part2

but that’s still a mess. The fundamental problem is that, as great as qcrecord is, it always wants to create a new patch. And you don’t.

Enter qcrefresh. It doesn’t exist, but you can get it by replacing your stock crecord with

  hg clone https://sfink@bitbucket.org/sfink/crecord # Obsolete!

Update: it has been merged into the main crecord repo! Use

  hg clone https://bitbucket.org/edgimar/crecord
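
Then enable it in your ~/.hgrc. The path below is an assumption about where you cloned it and how the repo is laid out; point the extension at the crecord package inside your clone:

  [extensions]
  # assuming the clone lives in ~/hg/crecord and the Python package is its crecord/ subdirectory
  crecord = ~/hg/crecord/crecord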

It does the obvious thing: the equivalent of a qrefresh, except it uses the crecord interface to select which parts should end up in the current patch. So now the above is:

  hg qcref                 # Keep everything you want for the current patch
  hg qnew update-part3
  hg qfold my-patch-part3
  hg qmv my-patch-part3

Still a little bit of juggling (though you could alias the latter 3 commands in your ~/.hgrc, I guess). It would be nice if qfold had a “reverse fold” option.
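
If you did want that alias, a shell alias along these lines should do it (an untested sketch; the alias name and the temporary patch name are my own invention, and the $1 expansion needs a reasonably recent hg):

  [alias]
  # hg qfoldinto my-patch-part3: put the remaining working-directory changes into a
  # temporary patch, fold the named patch into it, then take over that patch's name
  qfoldinto = !$HG qnew __update-for-$1 && $HG qfold $1 && $HG qmv $1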

Finally, when splitting up a large patch you often want to keep the original patch’s name and comment, so you’d really do:

  hg qcref                 # keep just the parts you want in the main patch
  hg qcrec my-patch-part2  # make a final part2
  hg qcrec my-patch-part3  # make a final part3

And life is good.


17 May 11

mozilla-central automated landing proposal

This was originally a post to the monster thread “Data and commit rules” on dev-planning, which descended from the even bigger thread “Proposing a tree rule change for mozilla-central”. But it’s really an independent proposal, implementable with or without the changes discussed in those threads. It is most like Ehsan’s automated landing proposal but takes a somewhat different approach.

  • Create a mozilla-pending tree. All pushes are queued up here. Each gets its own build, but no build starts until the preceding push’s build is complete and successful (the tests don’t need to succeed, nor even start.) Or maybe mostly complete, if we have some slow builds.
  • Pushers have to watch their own results, though anyone can star on their behalf.
  • Any failures are sent to the pusher, via firebot on IRC, email, instant messaging, registered mail, carrier pigeon, trained rat, and psychic medium (in extreme circumstances.)
  • When starring, you have to explicitly say whether the result is known-intermittent, questionable, or other. (Other means the push was bad.)
  • When any push “finishes” — all expected results have been seen — it is eligible to proceed. Meaning, if all results are green or starred known-intermittent, its patches are automatically pushed to mozilla-central.
  • Any questionable result is automatically retried once, but no matter what the outcome of the new job is, all results still have to be starred as known-intermittent for the push to go to mozilla-central.
  • Any bad results (build failures or results starred as failing) cause the push to be automatically backed out and all jobs for later pushes canceled. The push is evicted from the queue, all later pushes are requeued, and the process restarts at the top.
  • When all results are in, a completion notification is sent to the pusher with the number of remaining unmarked failures.

Silly 20-minute Gimped-up example:

  1. Good1 and Good2 are queued up, followed by a bad push Bad1
  2. The builds trickle in. Good1 and Good2 both have a pair of intermittent oranges.
  3. The pusher, or someone, stars the intermittent oranges and Good1 and Good2 are pushed to mozilla-central
  4. The oranges on Bad1 turn out to be real. They are starred as failures, and the push is rolled back.
  5. All builds for Good3 and Good4 are discarded. (Notice how they have fewer results in the 3rd line?)
  6. Good3 gets an unknown orange. The test is retriggered.
  7. Bad1 gets fixed and pushed back onto the queue.
  8. Good3’s orange turns out to be intermittent, so it is starred. That is the trigger for landing it on mozilla-central (assuming all jobs are done.)

To deal with needs-clobber, you can set that as a flag on a push when queueing it up. (Possibly on your second try, when you discover that it needs it.)

mozilla-central doesn’t actually need to do builds, since it only gets exact tree versions that have already passed through a full cycle.

On a perf regression, you have to queue up a backout through the same mechanism, and your life kinda sucks for a while and you’ll probably have to be very friendly with the Try server.

Project branch merges go through the same pipeline. I’d be tempted to allow them to jump the queue.

You would normally pull from mozilla-pending only to queue up landings. For development, you’d pull mozilla-central.

Alternatively, mozilla-central would pull directly from the relevant changeset on mozilla-pending, meaning it would get all of the backouts in its history. But then you could use mozilla-pending directly. (You’d be at the mercy of pending failures, which would cause you to rebase on top of the resulting backouts. But that’s not substantially different from the alternative, where you have perf regression-triggered backouts and other people’s changes to contend with.) Upon further reflection, I think I like this better than making mozilla-central’s history artificially clean.

The major danger I see here is that the queue can grow arbitrarily. But everyone in the queue has a collective incentive to scrutinize the failures up at the front of it, so the length should be self-limiting even if people aren’t watching their own pushes very well. (Which gets harder to do in this model, since you never know when your turn will come up, and you’re guaranteed to have to wait a whole build cycle.)

You’d probably also want a way to step out of the queue when you discover a problem yourself.

Did I just recreate Ehsan’s long-term proposal? No. For one, this one doesn’t depend on fixing the intermittent orange problem first, though it does gain from it. (More good pushes go through without waiting on human intervention.)

But Ehsan’s proposal is sort of like a separate channel into mozilla-central, using the try server and automated merges to detect bit-rotting. This proposal relies on being the only path to mozilla-central, so there’s no opportunity for bitrot.

What’s the justification for this? Well, if you play fast and loose with assumptions, it’s the optimal algorithm for landing a collection of unproven changes. If all changes are good, you trivially get almost the best pipelining of tests (the best would be spawning builds immediately). With a bad change, you have to assume that all results after that point are useless, so you have no new information to use to decide between the remaining changes. There are faster algorithms that would try appending pushes in parallel, but they get more complicated and burn way more infrastructural resources. (Having two mozilla-pendings that merge into one mozilla-mergedpending before feeding into mozilla-central might be vaguely reasonable, but that’s already more than my brain can encompass and would probably make perf regressions suck too hard…)

Side question: how many non-intermittent failures happen on Windows PGO builds that would not happen on (faster) Windows non-PGO builds?