Update to the l10n team page

I’ve given the team pages on l10n.mozilla.org a good whack in the past few days. Time to share some details and get feedback before I roll it out.

The gist of it: more data in less screen space. I folded things into rows and made the rows slimmer. Sign-off status is displayed better, too: I separated status from progress and actions, and actions are now ordered chronologically.

The details? Well, see my recording, where I walk you through them:

View it on youtube.

Comments here or in bug 1077988.

Create your own dashboard

We have a lot of data around localizations, but it’s hard to know what people might be looking for.

I just switched a new feature live: edit your own dashboard.

You can select branches of products, as well as the localizations you’re interested in, and get data you want.

Say you’re looking for mobile and India. You’d want Firefox OS and Firefox for Android, aka Fennec. The latter is actively localized on aurora, so you’d want the gaia tree and fennec_aurora. You want Assamese, Bengali, Gujarati… and 9 other languages. Select gu and pa, too, ’cause why not.

Or are you keen on Desktop in Latin America? Again we’re looking at Aurora, so fx_aurora is our tree of choice this time. The locales are Spanish in its American variants, and Brazilian Portuguese.

Select generously, you can always reduce your selection through the controls on the right side of the resulting dashboard.

Play around, and compare the Status and History columns. Try to find stories, and share them in the comments below.

A bit more detail on fx-aurora vs Firefox 32. Right now, Firefox 32 is on the Aurora channel and repository branch, so selecting either gives you the same data today. In six weeks, though, 32 is going to be on beta, so if you bookmark a link, it’d give you different data then. That’s why you can only select one or the other.

Progress on l10n.mozilla.org

Today we’re launching an update to l10n.mozilla.org (elmo).

Team pages and the project overview tables now contain sparklines, indicating the progress over the past 50 days.

Want to see how a localization team is doing? Now with 100% more self-serve.

If the sparklines go up like so

good progress

the localization is making good progress. Each spark is an update (either en-US or the locale), so sparks going up frequently show that the team is actively working on this one.

If the sparklines are more like

not so much

then, well, not so much.

The sparklines always link to an interactive page, where you can get more details, and look at smaller or larger time windows for that project and that locale.

You should also look at the bugzilla section. A couple of bugs with recent activity is good. More bugs with no activity for a long time, not so much.

Known issues: We still have localizations showing status for central/nightly, even though those teams don’t work on central. Some teams do, but not all. Also, the sparklines start at some point within the past 50 days; that’s because we don’t compute the status from before that point. We could.

A tale of convert and convert

Or, how I made converting gaia to gaia-l10n suck less.

Background: For Firefox OS, we’re exposing a modified repository to localizers, so that it’s easier to find out what to work on, and to get support from the l10n dashboards. Files in the main gaia repository on github like


should become


and the localizable sections in manifest.webapp,

  "locales": {
    "en-US": {
      "name": "Browser",
      "description": "Gaia Web Browser"

are exposed in manifest.properties as

name: Browser
description: Gaia Web Browser
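A minimal sketch of that flattening in Python (a hypothetical helper for illustration, not the actual conversion code): it pulls one locale’s entries out of the manifest JSON and emits them as key: value lines.

```python
import json

def locales_to_properties(manifest_json, locale="en-US"):
    """Flatten the "locales" section of a manifest.webapp into
    manifest.properties-style lines (illustrative sketch only)."""
    manifest = json.loads(manifest_json)
    entries = manifest.get("locales", {}).get(locale, {})
    return "\n".join(f"{key}: {value}" for key, value in entries.items())

manifest = '''{
  "locales": {
    "en-US": {
      "name": "Browser",
      "description": "Gaia Web Browser"
    }
  }
}'''
print(locales_to_properties(manifest))
# name: Browser
# description: Gaia Web Browser
```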

We’re also not supporting git on the l10n dashboard yet, so we need hg repositories.

I haven’t come across a competitor to hg convert on the git side yet, so I looked on the mercurial side of life. I started by glancing over the code in hgext/convert in the upstream mercurial code. That does a host of things to get parents and graphs right, and I didn’t feel like replicating that. It doesn’t offer hooks for dynamic file maps, though, let alone content rewriting. But it’s python, and it’s open-source. So I forked it.

With hg convert. Isn’t that meta? That gives me a good path to update the extension with future updates to upstream mercurial. I started out with a conversion of mercurial 2.7.1, then removed all the stuff I don’t need, like bzr support etc. Then I made the mercurial code do what I need for gaia. I had to disable some checks that try to avoid commits that don’t actually change the contents, because I don’t mind that happening. And last but not least, I added the filemap and the shamap of the initial conversion of hgext/convert, so that future updates don’t depend on my local disk.

Now I could just run hg gaiaconv and get what I want. Enter the legacy repositories for en-US. We only want fast-forward merges in hg, and in the conversion to git. No history editing allowed. But as you can probably guess, the new history is completely incompatible with the old, from changeset one. But I don’t mind, I hacked that.

I did run the regular hg gaiaconv up to the revision 21000 of the integration/gaia-central repository. That ended up with the graph for revision 4af36780fb5d.

gaia-central conversion

I pulled the old conversion for v1-train, which is the graph for revision ad14a618e815.

old en-US for v1-train

Then I did a no-op merge of the old graph into the new one.


That’s all good, but now future conversions via gaiaconv would still pick up the non-merged revision. Well, unless one just edits the generated shamap, and replaces all references to 4af36780fb5d with cfb28f851111. And yes, that actually works.
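That shamap edit is just a textual replacement; here it is as a Python sketch (the two-column source-to-destination line format is the hgext/convert convention, and the function name and source hash are made up for illustration):

```python
def redirect_shamap(shamap_text, old_dest, new_dest):
    """Replace references to the pre-merge conversion head with the
    merge changeset, so future incremental conversions build on it."""
    return shamap_text.replace(old_dest, new_dest)

# destination hashes from the conversion above; the source hash is invented
line = "9f8e7d6c5b4a 4af36780fb5d\n"
print(redirect_shamap(line, "4af36780fb5d", "cfb28f851111"))
# 9f8e7d6c5b4a cfb28f851111
```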

more conversions post-merge

Tadaaa, a fully automated conversion process, and only forward merges.

Repositories involved in this post:

git-merge 2013

I spent Friday and Saturday at git-merge, an unconference on git. Thursday was developer day, for core contributors to git itself and libgit2/jgit; I didn’t go there. Friday was “user” day, and Saturday was hack day. I figured it might be useful to go to the user day, and it turned out it was. It wasn’t all that much about users at all. It was strictly unconference, so people would take the stage and give a quick lightning talk, and later in the day we had a few breakout sessions. The most user-like people were folks migrating to git just now, and they had a good deal of folks to talk to in the breakout session.

The rest of Friday was actually library and tool hackers talking about what they’re doing. There are notes on github.com/git-merge/user-day, but I do want to highlight a few.

imerge is probably the most interesting to gecko hackers. It lets you merge or rebase conflict-heavy histories incrementally, with tons of automation. It can even do test runs on the merges it performs automatically. If you’ve ever resolved the same conflict 10 times in one rebase, this is for you.

libgit2 was under the hood of many, with core contributors of the library itself being there, plus the maintainers of at least the C# and the python bindings. There was also a good deal of tooling based on jgit, a pure-java implementation of git.

Much debate on java vs not, actually. And Cmd-F1. Little conflict between git and hg.

I also got to enjoy the github backend talk by vmg, with Ernie, Bert, and smoke through chimneys. I had seen a recording of it before, but it was well worth it seeing it live.

I joined the breakout session about teaching git, and had the pleasure to be in a group that taught git in various parts of the globe. Yes, that might be relevant to me, so that was good exercise. I talked with Scott Chacon about localized versions of the git book, and localized github docs, too.

Given that I had the chance to talk and co-hack with all those tooling guys, I decided to drop by on Saturday for the hack day. I wanted to take the opportunity to talk about my weird repository rewrite questions, and so I did.

Saturday was great. I only got 20 lines of code written, but I finally understood what git is about in the back-end. There’s loads of hackery that you can do, and funny enough, both Ben and I ended up hacking on a repository rewrite algorithm, which helped me a lot, both in talking about the structure of the code and in vetting the approach. His code in C# is actually in a PR, worth a look for people who want to see how to hack tooling on top of that binding. I tried the same in python, and succeeded to some extent. David Ibáñez helped me a great deal. But it also showed me the difference between the C# API and the python bindings. If only mono were easy to get up on the mac.

As for the conference itself, it was held at the Radisson Blu next to the Berliner Dom, which is a nice venue for that size. Wifi worked, food and beer were there. It’s sad that many people claimed tickets and didn’t show up. Now you know what you missed, and what you made other people miss. Bad bunny, no chocolates! Thanks again to Jen for setting things up so well.

Thanks to all the folks at git-merge for making it a great event. See you in Berlin…

Let me shake your hand

Timely for the 15 years of mozilla, let me share this with you.

I was wearing my Firefox hoodie in a store in San Francisco last week. I was trying to find out what to buy, and a young guy came by. I did let him pass, but he didn’t find what he wanted, and left the store. Just a moment later he returned and said

I’ve been using Firefox for 10 years, and I’d like to shake your hand.

First of all, feel your hand shaken. It’s obviously us and not me.

But more importantly, this happened in the US. For the longest time, folks from the US would tell those stories coming back from Europe. We’re finally there, across the globe.

And when it comes down to the mission, we’re obviously winning. I love to tell people that I work for Mozilla. And mostly everybody knows that we exist, and more importantly, knows that they have a choice between various browsers. And they’re making a choice.

So here’s to 15 more years to choice and innovation on the internet.

Risk management for releases at scale

Let me share some recent revelations I had. It all started with the infamous Berlin airport. Not the nice one in Tegel, but the BBI disaster. The one we thought we’d open last year, and now we don’t know which year.

Part of the news coverage here in Germany was all about how they didn’t do any risk analysis and are doomed, and how that other project, for the Olympics in London, did do risk analysis and came in under budget, ahead of time.

So what’s good for the Olympics can’t be bad for Firefox, and I started figuring out the math behind our risk to ship Firefox, at a given time, with loads of localizations. How likely is it that we’ll make it?

Interestingly enough, the same algorithm can also be applied to a set of features that are scheduled for a particular Firefox release. Locales, features, blockers, product-managers, developers, all the same thing :-). Any bucket of N things trying to make a single deadline have similar risks. And the same cure. So bear with me. I’ll sprinkle graphs as we go to illustrate. They’ll link to a site that I’ve set up to play with the numbers, reproducing the shown graphs.

The setup is like this: every single item (localization, for example) has a risk, and I’m assuming the same risk across the board. I’m trying to do that N times, and I’m interested in how likely it is that I’ll get all of them. And then I evaluate the impact of different amounts of freeze cycles. If you’re like me, and don’t believe any statistics unless they’re done by throwing dice, check out the dice demo.

Anyway, let’s start with 20% risk per locale, no freeze, and up to 100 locales.

80%, no freeze, up to 100

Ouch. We’re crossing 50-50 at 3 items already, and anything at scale is a pretty flat zero-chance. Why’s that? What we’re seeing is an exponential decay, the base being 80%, and the power being how often we do that. This is revelation one I had this week.

How can we help this? What if our teams failed less often? Feel free to play with the numbers, like setting the success rate from 80% to 90%. Better, but the system at large still doesn’t scale. To fight an exponential risk, we need a cure that’s exponential.

Turns out freezes are just that. And that’d be revelation two I had this week. Let’s add up to 5 additional frozen development cycles.

80%, 5 freezes, up to 100

Oh hai. At small scales, even just one frozen cycle kills risks. Three features without freeze have a 50-50 chance, but with just one freeze cycle we’re already at 88%, which is better than the risk of each individual feature. At the large scales we’re seeing in l10n, 2 freezes bring the risk down to roughly linear, and 3 freezes are pretty solid. If I’m less confident and go down to 70% per locale, 4 or 5 cycles create a winning strategy. In other words, for a base risk of 20-30%, 4-5 freeze cycles make the problem of a localized release scale.

It’s actually intuitive that freezes are (kinda) exponentially good. The math is a tad more complicated, but simplified: if your per-item success rate is 70%, you only have to solve your problem for 30% of your items in the next cycle, and for 9% in the second cycle. Thus, you’re fighting scale with scale. You can see this in action in the dice demo, which plays through this each time you “throw” the dice.
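The numbers above fall out of a two-line model (my reading of the graphs, not the demo site’s actual code): with f freeze cycles, each item gets f + 1 tries, so per-item success becomes 1 − (1 − p)^(f+1), and the release needs all N items at once.

```python
def release_success(p_item, n_items, freezes=0):
    """Chance that all n_items (locales, features, ...) make the release.

    Each freeze cycle gives every unfinished item another try, so the
    per-item success rate becomes 1 - (1 - p_item) ** (freezes + 1);
    the release as a whole needs all items, hence the ** n_items.
    """
    per_item = 1 - (1 - p_item) ** (freezes + 1)
    return per_item ** n_items

# 80% per item, no freeze: 50-50 already at about 3 items.
print(round(release_success(0.8, 3), 2))             # 0.51
# One freeze cycle lifts those same 3 items to ~88%.
print(round(release_success(0.8, 3, freezes=1), 2))  # 0.88
```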

Now onwards to my third revelation while looking at this data. Features and blockers are just like localizations. Going into the rapid release cycle with Firefox 5 etc., we made two rules:

  • Feature-freeze and string-freeze are on migration day from central to aurora
  • Features not making the freeze take the next train

That worked fine for a while, but since then, mozilla has grown as an organization. We’ve also built out dependencies inside our organization that make us want particular features in particular releases. That’s actually a good situation to be in. It’s good that people care, and it’s good that we’re working on things that have organizational context.

But this changed the risks in our release cycle. We started off having a single risk of exponential scale after the migration date (l10n). Today, we have features going into the cycle, and localizations thereof. At this point, having feature-freeze and string-freeze be the same thing becomes a risk for the release cycle at large. We should think about how to separate the two to mitigate the risk for each effectively, and ship awesome and localized software.

I learned quite a bit looking at our risks, I hope I could share some of that.

Language packs are restartless now

Language packs are add-ons that you can install to add additional localizations to our desktop applications.

Starting with tomorrow’s nightly, and thus following the Firefox 18 train, language packs will be restartless. That was bug 677092, landed as 812d0ba83175.

To change your UI language, you just need to install a language pack, set your language (*), and open a new window. This also works for updates to an installed language pack. Opening a new window is the workaround for not having a reload button on the chrome window.

The actual patch turned out to be one line to make language packs restartless, and one line so that they don’t try to call in to bootstrap.js. I was optimistic that the chrome registry was already working, and rightfully so. There are no changes to the language packs themselves.

Tests were tricky, but Blair talked me through most of it, thanks for that.

(*) Language switching UI is bug 377881, which has a mock-up for those interested. Do not be scared, it only shows if you have language packs installed.

Why l10n tools should be editors instead of serializers

If your tool serializes internal state instead of editing files, it’ll do surprising things if it encounters surprising content. Like, turn

errNotSemicolonTerminated=Named character reference was not terminated by a semicolon. (Or “&” should have been escaped as “&amp;”.)

into
errNotSemicolonTerminated=Named character reference was not terminated by a semicolon. (Or “” should have been escaped as “amp;”.)

And that’s for a string the localizer never touched.

(likely narro issue 316)
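The failure mode is easy to reproduce in miniature (an illustration using Python’s html module, not narro’s actual code): once a tool decodes entities into internal state, the information it would need to re-serialize the file correctly is gone.

```python
import html

value = ('Named character reference was not terminated by a semicolon. '
         '(Or "&" should have been escaped as "&amp;".)')

# A serializer-style tool parses the value into internal state...
state = html.unescape(value)
# ...and now both ampersands are bare "&"; nothing distinguishes the
# one that was literal from the one that was escaped, so writing the
# state back out cannot reproduce the original line. An editor that
# patches the original text never decodes it in the first place.
print(state)
```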

Notes on SemWiki

Semantic Wiki is nice, but it’s hard to wrap one’s head around. Thus, writing down some notes-to-non-self.

Most importantly, start with paper. SemWiki isn’t very forgiving if you reconsider. Once you’ve made some headway with paper, set up mediawiki locally. I’ve thrown my db away three times so far because I did reconsider. FYI, that’s a tad tricky; here’s what works for me:

  • kill the db and recreate it
  • move LocalSettings.php away
  • load index.php, follow the configuration ’til you have a db, ignore the download
  • php maintenance/update.php to get the semwiki tables
  • Special:SMWAdmin to do tables and data update (log in again)
  • php maintenance/runJobs.php to speed up the data update

Using Special:CreateClass is nice, as it’s doing a whole lot of things for you. There is one thing it doesn’t do: for Properties linking to Pages, it won’t ask you for the Form to create/edit those pages. Special:CreateProperty does, but that’s of little help. You can add that later by editing the Property page, and adding a Has default form like

This is a property of type [[Has type::Page]]. It links to pages that use the form [[Has default form::Aisle Milestone]].

Btw, Property name is the wiki page name of a property, while Field name is the human-readable name used in the created forms. If you lose the mapping between field and property name, edit the form, and explicitly specify the property with {{{field|Summary|property=Has summary}}} or the like. Though, more likely, you dropped [[Has summary::{{{Summary|}}}]] in the template; I replaced that with just {{{Summary|}}}, which breaks stuff.

Also, you don’t want to use ‘/’ in your form names, that breaks editing URLs from your referring pages.

Oh, and yes, you don’t need to add the namespaces like Template: etc when entering the names in the semwiki forms, those are prepended automagically.

Another drawback of Special:CreateClass is that it does all its work asynchronously, so you’ll need to wait a while ’til things are up for you to actually enter your data. Locally, you can speed things up with php maintenance/runJobs.php.

I’m still torn on how much I’d really like to use the free text, right now I’m using a Text property to create a summary that I can put in a prominent part of the template.

One more trick: if you’re using largely prefixed pages, like we do on wikimo, you can get pretty decent sorting if you edit the Template to have a category hook like

[[Category:Aisle/Use Case|{{#titleparts: {{PAGENAME}} | | -1 }}]]

This is for Aisle, for which I ended up creating my own semweb instead of using our standard feature pages. They have a ton of stuff I don’t need, and don’t have a few things I hope to use.