Ask Pike at MozCamp Asia 2011

I’ll be at MozCamp Asia 2011, and as I haven’t been to Asia outside of India, I figured I should talk about the things you want me to talk about, and not about what’s on my mind.

Thus, the session is gonna be titled “Ask Pike”, and I’m fielding questions on Google Moderator. Of course, I’ll also take questions live.

See you in KL.

… sung to the tune of …

Everytime
I redesign
I cry a little

Everytime
I change my mind
I wonder why a little

Sung to the tune of that song that Simply Red covered and bug 650816.

Why I hate git

wokbok:django-durationRel axelhecht$ git push -f
Counting objects: 7, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 560 bytes, done.
Total 4 (delta 2), reused 0 (delta 0)
Unpacking objects: 100% (4/4), done.
remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: error: is denied, because it will make the index and work tree inconsistent
remote: error: with what you pushed, and will require 'git reset --hard' to match
remote: error: the work tree to HEAD.
remote: error: 
remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
remote: error: its current branch; however, this is not recommended unless you
remote: error: arranged to update its work tree to match what you pushed in some
remote: error: other way.
remote: error: 
remote: error: To squelch this message and still keep the default behaviour, set
remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
To /Users/axelhecht/src/django-durationRel
 ! [remote rejected] master -> master (branch is currently checked out)
error: failed to push some refs to '/Users/axelhecht/src/django-durationRel'

Also known as ux-jargon and completely useless.

Data models and “vom Kopf auf die Füße”

As you all know, we’re on a new release scheme. That’s all good and great for localization, but there’s one tiny little catch: it exposed each and every design problem in the l10n dashboard, code-named elmo these days.

As many folks wonder why I’m still talking about how the l10n dashboard needs more work, I’ll put some details out there.

The Milestone object is the thing we use to keep track of which version of a localization was shipped in which release-style build. It backs views like the Fennec 6 Beta 3 milestone info page, and says things like “we’re adding pl, and updating nl, ru, zh-TW”. That can be used for QA, verification, etc.

The AppVersion object tracks a particular release, say, Firefox 3.6 or Firefox 6. It contains a series of milestones. AppVersion objects are tied to an Application object.

The actual compare-locales builds are hooked up to a Tree object, which represents the repositories to compare for a particular application.

The trick is how all these objects are tied together. Gandalf and I designed this back in the days of the Firefox 3.6 release. Back then, we had loooong release cycles, with lengthy cycles even for individual milestones, and string freezes for each milestone. At that point, we’d open up sign-offs. Remember, we didn’t have l10n-merge on for release builds back then, so we could only start reviewing the localizations after string freeze. Also, we created the hg branches for a release early in the cycle, and then shipped most of our betas from that branch, while development on central progressed merrily.

Thus, our design decisions back then were:

There’s one static repository setup for a version of an application. Umpf. Can you see how bad that is today, where we switch our repo setup every six weeks?

Whether a localizer can sign off or not depends on whether the upcoming milestone is string frozen or not. In other words, we need to have the upcoming milestone early to begin with, which is such a hassle now that we’re doing them weekly instead of bi-monthly. Also, with l10n-merge and string-frozen branches, all that logic just … face palm.

Localizers sign off on a version of the application, with a push to its l10n repository. Pushes are per repo; appversions span repos today. I.e., I push to aurora, sign off, it’s good, the appversion migrates to beta, but the push is still on aurora.

Review actions on sign-offs are forever. Say I r+ a sign-off on aurora and that goes to beta, but then a lack of traction makes that revision a really bad thing to ship for the next cycle. I can’t mark that sign-off bad for Firefox 12 and good for Firefox 11.

Lessons learned:

  • appversions hop from tree to tree, over time
  • sign-offs are per tree, this localization at this point is good, source-wise
  • actions on sign-offs can be per appversion
  • milestones aren’t required before we actually ship something

Or, as we say in German, we have to put the design “vom Kopf auf die Füße” (off its head and back onto its feet).
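
To make that concrete, here’s a minimal sketch of where those lessons point, written as Django-style models. The names and fields are illustrative only, not the actual elmo schema:

# Illustrative sketch only, not the actual elmo schema.
from django.db import models

class Tree(models.Model):
    """A repository setup we run compare-locales against."""
    code = models.CharField(max_length=50, unique=True)

class AppVersion(models.Model):
    """A release-facing version, e.g. Firefox 11; not tied to a single tree."""
    version = models.CharField(max_length=20)

class AppVersionTree(models.Model):
    """An appversion lives on a tree for a window of time (central, aurora, beta)."""
    appversion = models.ForeignKey(AppVersion, on_delete=models.CASCADE)
    tree = models.ForeignKey(Tree, on_delete=models.CASCADE)
    start = models.DateTimeField(null=True, blank=True)
    end = models.DateTimeField(null=True, blank=True)

class Signoff(models.Model):
    """A localizer says: this push on this tree is good, source-wise."""
    tree = models.ForeignKey(Tree, on_delete=models.CASCADE)
    push_id = models.IntegerField()  # stand-in for a proper Push foreign key

class Action(models.Model):
    """A review of a sign-off; can be taken per appversion."""
    signoff = models.ForeignKey(Signoff, on_delete=models.CASCADE)
    appversion = models.ForeignKey(AppVersion, null=True, blank=True,
                                   on_delete=models.CASCADE)
    accepted = models.BooleanField(default=False)

# Milestone objects would only get created once we actually ship something.

The point being: a sign-off hangs off a tree and a push, review actions hang off the sign-off and an appversion, and the tree an appversion lives on can change over time.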

Coming soon: cross-repository network graphs for hg.m.o

Did you miss the ability to look at our source code and figure out who’s working on what, and where? Thought that only GitHub can do that?

Let me give you a sneak peek at what I’ve been hacking on over the weekend.

What’s the big picture of our mainline code development?

branched repos of mozilla-central

You can see the blue line of development along mozilla-central, and then branch points for the release branches, and now for beta (miramar), and aurora. That’s pretty broad, and intentionally so. If you’re interested in the back and forth on a changeset level, this is how the branch point of fx5 beta looks:

changesets around the fx5 beta branch point

Why does it say aurora? Because hg doesn’t know what you’re looking for. I determine what was branched off of what by looking at pushes: wherever a particular changeset was pushed first wins. As beta branched off of aurora later, you see that this was the branch of aurora at the time.
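
The rule itself is trivial. Here’s a toy sketch of it in Python, with made-up push records rather than the real pushlog data:

# Toy sketch: a changeset "belongs" to the repo it was pushed to first.
def first_push_repo(changeset, pushes):
    """pushes: iterable of (repo, push_timestamp, changesets) records."""
    candidates = [(when, repo) for repo, when, csets in pushes if changeset in csets]
    return min(candidates)[1] if candidates else None

pushes = [
    ('mozilla-aurora', 1302000000, {'abc123', 'def456'}),
    ('mozilla-beta',   1303100000, {'abc123'}),  # same changeset, pushed later
]
print(first_push_repo('abc123', pushes))  # mozilla-aurora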

Sadly, you don’t see the more interesting detail: that after that branch point, the developer tools branch merged. If the database in the backend were tracking our project repos, you’d see that.

Of course, there’s also an l10n side to this. I get many questions on what to merge where and when, and it’s hard to see what ended up in which repo, and whether things differ. Let’s follow one of the l10n repos at large:

branched repositories of Japanese

There are even more details on the rapid branches for this one, like so:

changesets around the Japanese fx5 beta branch

Many of the same landings, but with different hg changesets, and then a merge. That merge didn’t make it to miramar, though. Which is OK, that’s a one-off anyway.

Now that I’ve whetted your appetite, bad news: none of this is live yet. I’ll actually need to make a patch, at least for the API in bug 659260, get review, and get it live. Also, this is currently code that runs as part of the l10n dashboard, which isn’t really intended to cover all our sources. If we want this at large, it’ll need liberation, and the resources that come with that. The actual code is pretty easy to liberate, though.

And the graphs are made with Protovis, including a custom Network-based layout to do DAGs.

Reviewing sign-offs, slightly different

Opening the magic box of l10n admin stuff:

We’re doing sign-offs, y’know? Localizers hit the l10n dashboard and click a button to say “this revision is good to ship”. Which is cool, because then they don’t need approval for every patch for release branch fixes.

And I’m reviewing the sign-offs. Sounds all good and well proven.

Enter the new release cycle. What’s new? This is a small update, on a quick turnaround. So I can’t do what I did for previous releases and simply not review the first sign-off. That one was just an early beta (for most locales) and had some 1,000 new strings, so skipping it sounded fair. Now we’re looking at about 30 strings, so an incremental review against what’s on 4.0.1 is in order. So what does that mean?

  1. I need to get the revision that we’re shipping on 4.0.1. For sign-offs that update one branch, that’s all hooked up in the UI, but not for 4.0.x to 5. That’s bug 655943.
  2. The revision on 4.0.1 is on l10n-mozilla-2.0; the sign-off on 5 is on a revision of l10n/mozilla-aurora. Neither needs to exist in the other repo, so you can’t just use plain hg commands on a single repository. That’s bug 655942.

Now, I wouldn’t be me if I didn’t script myself out of it; here’s the gist of it. And yes, this blog post is about as much code commentary as that script gets.
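
The actual script lives in that gist; the rough shape of it is something like the sketch below. The clone path, repository URL and revisions are placeholders, and the real thing has more plumbing around it:

# Rough shape of the cross-repo comparison; paths, URL and revisions are placeholders.
import subprocess

aurora_clone = '/src/l10n/mozilla-aurora/ab-CD'   # local clone carrying the sign-off
old_repo = 'https://hg.mozilla.org/releases/l10n-mozilla-2.0/ab-CD'
shipped_rev = 'REV_SHIPPED_ON_4_0_1'
signoff_rev = 'REV_SIGNED_OFF_FOR_5'

# Pull the shipped revision into the aurora clone, so both revisions live in
# one repository and plain hg commands work again.
subprocess.check_call(['hg', 'pull', '-r', shipped_rev, old_repo], cwd=aurora_clone)
diff = subprocess.check_output(['hg', 'diff', '-r', shipped_rev, '-r', signoff_rev],
                               cwd=aurora_clone)
print(diff.decode('utf-8', 'replace') or 'nothing changed between the two revisions')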

Back home, a good deal lighter

Next time you see me, I’ll look a tad different. That bump on my head got removed, surgery yesterday went fine, got home today. I’ll be taking it slow today, but should be up to speed tomorrow.

Thanks for all the good wishes.

Being a localizer in the rapid release cycle

We’re changing to a 6-week release train model, and this is going to impact how localizers do their contributions. The following scheme has been circulated in .planning for a bit, so this is what we’ll be doing. We’ll adapt it if needed, of course, based on experience with the next cycle or two.

Recap on the rapid release cycle: en-US developers work on mozilla-central, as they used to, and every 6 weeks, we’ll pull their contributions to another repository, called mozilla-aurora. That repository is string frozen. String changes only land in this repository as part of the merge from central to aurora. After another 6 weeks, the content goes to yet another repository, mozilla-beta. Corresponding to those, there’s l10n/mozilla-aurora and l10n/mozilla-beta. And now you know. Find a glossary at the end of this post.

There are two different localizer schemes: Early birds and friends of string freeze. Read the following descriptions and pick one for your individual localization team.

Early Birds are those localization teams that are happy to follow the mozilla-central content quickly and make sure that all issues relating to localizing that code are found and fixed. We already have a few of those, who have built a reputation among our hackers for giving good input to follow. We don’t need a lot of them, but the ones we have are crucial to making the plan work, and to having code that is properly localizable on aurora at any time. You’ll be following the fx_central tree on the l10n dashboard to catch up on changes.

Friends of String Freeze are those teams that prefer to have stable content to localize, with a decent time window to act on it. Many of our localization teams are in this group. If you’re in this group, you’ll set your calendar alarm to the next window, hg pull -u on your mozilla-aurora clone and your l10n/mozilla-aurora clone, localize, push, test, fix, push, sign off. Then you set your calendar to the next 6-week cycle, and you’re all set. The expectation here is that the number of strings will be rather low, so a day of l10n plus testing and fixing is fine. Usually, you should be able to deliver a great localization for the next version of Firefox in some 3 days. Firefox 5 right now is some 30 strings; other releases will be a good deal bigger, but nowhere close to the 1.2k strings of Firefox 4. You’ll be watching the fx_aurora tree on the l10n dashboard to see the status of your localization.

Sign-offs will happen on aurora, and in rare cases on beta. The setup we work towards release on is aurora.

What about the beta repositories? Well, I hope to not see a necessity to land on l10n/mozilla-beta for the most part. You should expect that changes you make on l10n/mozilla-beta will be dropped once we do the next update from aurora, so you want to have the fixes on both aurora and beta, if applicable. But really, you want to be good on aurora. Then beta will be fine and no hassle.

How that maps to mercurial work:

For the Friends of String Freeze, you won’t need to worry about anything other than pulling on both repos every cycle. We’ll take your content from l10n/mozilla-aurora to l10n/mozilla-beta, and may very well at some point stop doing l10n-central builds for you at all. Just keep things simple here.

For the Early Birds, we’ll rely on you self-identifying and doing a tad of extra work. You’ll be in the best shape to merge your contributions from l10n-central to l10n/mozilla-aurora, making sure that the result has all your fixes from both central and aurora, where you want them. You’re techy-geeky-savvy anyway, so that’s all right. If at some point we learn that there’s a pattern that benefits from automation, we’ll check in on that when we get there, too. You shouldn’t have to worry about getting content onto l10n/mozilla-beta any more than the rest, though.
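
In mercurial terms, that merge could look roughly like the sketch below for one locale; the clone paths are placeholders, and hg merge may of course need manual conflict resolution:

# Rough outline of merging l10n-central work into l10n/mozilla-aurora for one locale.
# Clone paths are placeholders; conflicts still need a human.
import subprocess

def hg(clone, *args):
    subprocess.check_call(('hg',) + args, cwd=clone)

central_clone = '/src/l10n-central/ab-CD'
aurora_clone = '/src/l10n/mozilla-aurora/ab-CD'

hg(aurora_clone, 'pull', '-u')             # get the latest aurora state
hg(aurora_clone, 'pull', central_clone)    # bring over your central landings
hg(aurora_clone, 'merge')                  # resolve any conflicts before going on
hg(aurora_clone, 'commit', '-m', 'merge l10n-central into mozilla-aurora')
hg(aurora_clone, 'push')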

Glossary:
mozilla-central is the mercurial repository that en-US code is landed to as development makes progress.
l10n-central is the tree of mercurial repositories that the early-bird localizers use as development makes progress.
central is short for either, or both, of mozilla-central and l10n-central, depending on context.

The terms around mozilla-aurora, l10n/mozilla-aurora, and aurora map to their corresponding terms for central, same for mozilla-beta, l10n/mozilla-beta, and beta.

Update: Fixed the links to map to the new and stable repository locations.

compare-locales 0.9.1 is out

I released compare-locales 0.9.1 yesterday on PyPI. Do the regular

easy_install -U compare-locales

to update your local copy.

This update includes two bug fixes compared to 0.9:

  • Don’t warn about XML-defined entities like &amp;, bug 604404
  • Ensure that merged entities have a trailing newline, bug 612619

In particular, the latter will make our l10n-merge code more stable. Sadly, we actually need to fix all the newly reported errors in all stable branches and apps before we can update the production tag. Errors make compare-locales fail, and rightfully so. And failing is bad for release builds that don’t merge, also rightfully so.

As sure as logs are logs

… or not.

As promised, I’ll write a bit about build logs today. You’ll see what our logs are, and, to begin with, I’ll take you on a tour through buildMessage to explain how the logs we have end up being what you see served off of tinderbox.

First off, buildbot is basically the same thing as any regular Gecko app: one main thread and loads of callbacks. So when reading on, all your spontaneous reactions are good.

The buildMessage code does:

  1. synchronous IO to load all logs of a build into memory, basically up to some 70M
  2. synchronous string handling to paste all that data together, with some extra padding
  3. synchronous compression of the resulting string
  4. synchronous base64 encoding of the compressed string

All on the main thread, all in one go, blocking. All of that to give you a single lengthy unformatted blob of text. Why?

Because our build logs are actually not a single lengthy unformatted blob of text, which is what tinderbox wants.
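
In spirit, and heavily paraphrased (this is not the actual buildbot code), that assembly boils down to something like this:

# Heavily paraphrased sketch, not the actual buildbot code.
import base64
import zlib

def build_message_blob(log_paths):
    chunks = []
    for path in log_paths:
        with open(path, 'rb') as f:
            chunks.append(f.read())         # synchronous IO, whole log in memory
    blob = b'\n'.join(chunks)               # paste everything together
    compressed = zlib.compress(blob)        # synchronous compression
    return base64.b64encode(compressed)     # synchronous base64, still on the main thread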

Let’s have a peek at what our build logs really are. In my previous posts, I introduced you to the concept of build steps. They’re really the basic unit of work to be done for a build. Now, the logs are stored in buildbot pretty much as the data comes in; that is, each log is associated with a step, and storage happens as the chunks arrive. Commonly, that’d be stdout and stderr data coming from shell commands run on the slave. The information about which stream the data is on is persisted, too, as is the order, so any log basically looks like this:

Step reference header length data
stdout length data
stdout length data
stdout length data
stderr length data
stdout length data
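
In code terms, that’s just an ordered list of (channel, data) chunks per log; purely illustrative:

# Purely illustrative: a step log as an ordered list of (channel, data) chunks.
HEADER, STDOUT, STDERR = 'header', 'stdout', 'stderr'

log = [
    (HEADER, 'make -C browser/locales langpack-de\n in dir /builds/...\n'),
    (STDOUT, 'compiling de ...\n'),
    (STDERR, 'warning: untranslated entity\n'),
    (STDOUT, 'done.\n'),
]

# Channel and order are preserved, so you can render or filter either stream:
errors = ''.join(data for channel, data in log if channel == STDERR)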

As most of you aren’t among the few privileged ones who can actually look at the real logs, I’ve set up a fake log page for you to take a look at. It’s an l10n repack, mostly because those are somewhat small in both step count and log size, and because I’m used to them. Here’s the actual make step highlighted. You can see the introduction shown in blue, which is the common color for header chunks. Buildbot just uses that channel to show setup and shutdown information for the step. Then there’s the actual make output in black. If there was something on stderr, it’d be styled in red. Sorry, I didn’t quickly come up with something that has stderr.

The first take-away is that you can get to just the build output of the step you’re interested in.

If you’re nostalgic, you can check the tinderbox checkbox, and the CSS style sheet changes to show you what you’d get from tinderbox. Try to find the information again.

One further detail: there can be more than one log per step. Build steps that set build properties quite commonly have two logs, one that keeps track of the command that got run, and another that keeps track of the build properties that actually changed. You can look at an example in the builddir step. The boring last line is the second log.

Log files are really not all that complicated, and much more useful than what we get back from tinderbox. Let’s look at some of the pros:

Log files come in as the build goes. This enables buildbot to publish build logs in almost real time. There’s little to no cost for that, too; a simple node.js proxy can ensure that only one log is read at any time. Another benefit is that one can archive logs incrementally, removing the current stress on the masters of publishing more data than they want to chew in one go.

Log files are per task. As the logs are associated with a step, which has a name and a builder, there’s pretty rich information available on what the data in question is actually about. Think about hg-specific error parsers for one step, ftp-specific ones for the next, and mochitest-specific ones for the one after that. All in one build. If we archived the raw data, we could easily improve our parsers and stay compatible with old logs, or add new steps to the build process without fear of breaking existing log parsers.
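
For instance, a per-step dispatch could be as simple as the sketch below; the step names and error patterns are made up:

# Sketch of step-aware log parsing; step names and patterns are made up.
def parse_hg(log):
    return [line for line in log.splitlines() if line.startswith('abort:')]

def parse_mochitest(log):
    return [line for line in log.splitlines() if 'TEST-UNEXPECTED' in line]

def parse_generic(log):
    return [line for line in log.splitlines() if 'error' in line.lower()]

PARSERS = {'hg update': parse_hg, 'mochitest': parse_mochitest}

def errors_for_step(step_name, log_text):
    return PARSERS.get(step_name, parse_generic)(log_text)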

Tinderbox can still be fed. Even if we’re not sending out tinderbox log mails from the masters, we can still do the processing out of band, in an external process or even on an external machine, offload the masters, and not force ourselves to change all the infrastructure in one go.

There is a hard piece, too: storage. Build logs are plentiful, and they’re anywhere from a dozen bytes to 70M, even within the same build. There are hundreds of thousands of small files, and thousands of really large ones. I hope that adding some information on what our build logs really are helps to spark a design discussion on this. Whether to compress, and at which level? Retention, per step type, even? Store as single files, in one dir, in a hierarchy, or as tarballs? Or all of the above, as part of retention? Is HBase a fit?