Axel Hecht Mozilla in Your Language

January 20, 2012

compare-locales 0.9.4 released

Filed under: L10n, Mozilla — Axel Hecht @ 4:37 am

There’s yet another update to compare-locales; we’re now at 0.9.4. Please update your local installs with

pip install -U compare-locales

Changes since 0.9.3 are:

  • Catch % as error. Sadly, there’s not much more the parser reports than Invalid Token, but at least it says something. You need to escape it as &#037; (see the example below).
  • Stability fix: there was a crash on <!ENTITY "reference to &ƞǿŧ;-known entity">. Unicode is hard.
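
To illustrate the first point, a made-up DTD snippet with invented entity names; the first entity now draws the new error, the second shows the escaped form that parses cleanly:

<!ENTITY percent.bad "100% free">
<!ENTITY percent.good "100&#037; free">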

December 26, 2011

Minor update to compare-locales for mobile/android/base

Filed under: L10n, Mozilla — Axel Hecht @ 6:15 am

I’ve just pushed a minor update to compare-locales to pypi and the dashboard.

The only change is that it applies the Android quote tests to the files in mobile/android/base.
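
For a rough idea of what these tests catch, here’s an invented .properties line; Android’s resource format treats a bare apostrophe as a quoting character, so as far as I can tell the first variant gets flagged and the escaped second one passes:

tab.closed=You've closed a tab
tab.closed=You\'ve closed a tab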

As always, update your local installs with

pip install -U compare-locales

December 7, 2011

compare-locales 0.9.2 released

Filed under: L10n, Mozilla — Axel Hecht @ 5:38 am

I just uploaded a new release of compare-locales to pypi, hg.m.o and github.

Changes since the last released version:

  • Support for nested l10n.inis, notably browser/branding.
  • Errors on CSS specs: if the en-US value is a length or a min-width spec etc., the translation also needs to be one.
  • Warn if CSS specs don’t match in property or unit. Say, en-US gives min-width:14ex and the localization has width:120px; that’s now a warning (see the example after this list). Thanks to Rimas for the request.
  • Warn if en-US is just a number, and the localization is not.
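
To make the CSS checks concrete, an invented pair of entities; a localization like the second line gets warned about both the property (width vs. min-width) and the unit (px vs. ex):

<!ENTITY window.style "min-width: 14ex;">
<!ENTITY window.style "width: 120px;">

Likewise, if the en-US value is plainly a number, say 2, a localization that isn’t a number gets a warning.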

See also the pushes on hg.m.o.

You can install/update with

pip install -U compare-locales

Next up is to use the new version on the dashboard.

It’s not part of our release automation, though. Bug 650465 met some resistance in release-drivers, IIRC, as we’d need to change what we’re shipping in 3.6. More errors mean failure unless l10n-merge is on; turning it on for existing builds effectively changes all 20 locales that have errors on 3.6.

November 21, 2011

Mozilla Europe and Mozilla in Europe

Filed under: Mozilla — Axel Hecht @ 4:08 am

The message below has been communicated to MozCamp Berlin attendees and Mozilla employees via email, signed by Mitchell Baker and Tristan Nitot, but it should be public, so it has been posted to mozilla.governance. We also wanted to put it on a blog so that it ends up on planet.mozilla.org, but Tristan’s server is in trouble. Being a Mozilla Europe board member, with the approval of Tristan and Mitchell, I’m posting it here. Feedback and discussions should happen on mozilla.governance.

At the EU MozCamp in Berlin we shared plans for further focusing and expanding Mozilla efforts in Europe – and we thought you might be interested to know what we said.

Mozilla has been widely successful in Europe. The Mozilla Mission resonates especially well with Europeans. The user base of Firefox and Thunderbird is very large, and Firefox is a well-understood part of mainstream life.

What many of us don’t realize is that we have achieved this success in Europe with a very complex organizational structure — in fact, we had three different organizations, with separate and overlapping online presences (i.e. mozilla.org, mozilla.com and mozilla-europe.org). We’ve been asking our communities and users to interact with all three, and we’ve been trying to keep content updated and synced among the three.

Then we started the “One Mozilla” program giving the world the experience of “Mozilla” – the mission and Mozilla programs – not our organizational structure. We have merged our various websites back into mozilla.org – www.mozilla.com is no more. Similarly, www.mozilla-europe.org pages are or will be merged into mozilla.org. Going forward, we are also looking at integrating innovation work across Labs and Drumbeat into the mozilla.org structure.

At the same time, we’ve paved the way for our various communities to operate as an integrated whole by building out a holistic contributor engagement program. European localizers, localizers from other geographies, our international engagement efforts, ReMo, SiGs are all working together. Along these lines, we’ve also been looking at our organizational structure in Europe.

As a result, the Board of Mozilla Europe has come to feel that the Mozilla Europe association as a separate independent entity is no longer needed. We discussed this with Mitchell, who was part of forming Mozilla Europe in the first place (though never a board member) and she agreed this is the best path forward. It became clear in this process that over the years, many of the innovations pioneered by the Mozilla Europe association have been adopted as part of our global efforts. For example, mozilla-europe.org hosted our first multi-lingual Mozilla website and created our first structured system for doing so. Today the model of localized content is woven into everything we do. And MozCamps themselves are another great example of European innovations going global.

Streamlining the global Mozilla organization by transferring initiatives from a regional entity to a global team means that the ideas incubated in Europe can now be more easily expanded on a global scale. Integrating Mozilla Europe efforts under the umbrella of the broader Mozilla organization will allow us to spend less time on bureaucracy and will give us more time to make awesome things happen. We will have clear processes around the globe to continue and expand our presence at local events, and to ensure reimbursements and swag orders are easy and timely. We will have fewer web sites to keep updated, and thus more time to create compelling content. We will not do less in Europe; we can do more!

Mozilla Europe did not have paid staff for a number of years. Thus no staff is affected by the changes that will go into effect between now and the end of the year.

It is clear that Europe is an integral part of Mozilla. It’s not a regional part or a regional hub, it’s part of the core of Mozilla. To keep the momentum, we are investing in more Mozilla Spaces across Europe: Paris will be joined by spaces in London and Berlin in 2012. This means we have more room for volunteer participation as well as for paid staff. Thus as we work to significantly scale in Europe and around the world, we will continue to grow this core going forward.

November 16, 2011

Ask Pike at MozCamp Asia 2011

Filed under: Mozilla — Axel Hecht @ 11:48 am

I’ll be at MozCamp Asia 2011, and as I haven’t been to Asia outside of India, I figured I should talk about the things you want me to talk about, and not about what’s on my mind.

Thus, the session is gonna be titled “Ask Pike”, and I’m fielding questions on Google Moderator. Of course, I’ll also take questions live.

See you in KL.

August 24, 2011

… sung to the tune of …

Filed under: Mozilla — Axel Hecht @ 1:00 am

Everytime
I redesign
I cry a little

Everytime
I change my mind
I wonder why a little

Sung to the tune of that song that Simply Red covered and bug 650816.

July 22, 2011

Data models and “vom Kopf auf die Füße”

Filed under: L10n, Mozilla — Axel Hecht @ 4:28 am

As you all know we’re having a new release scheme. That’s all good and great for localization, but there’s one tiny little peppermint: It exposed each and every design problem in the l10n dashboard, code-named elmo these days.

As many folks wonder why I’m still talking about how the l10n dashboard needs more work, I’ll put some details out there.

The Milestone object is the thing we use to keep track of which version of a localization was shipped in which release-style build. It backs views like the Fennec 6 Beta 3 milestone info page, and says “we’re adding pl, and updating nl, ru, zh-TW”. That can be used for QA and verification etc.

The AppVersion object tracks a particular release, say Firefox 3.6 or Firefox 6. It contains a series of milestones. The AppVersion objects are tied to an Application object.

The actual compare-locales builds are hooked up to a Tree object, which represents the repositories to compare for a particular application.
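
Since elmo is a Django app, here’s a rough sketch of those relationships in Django terms; the four class names are from the dashboard, everything else (fields, types) is my guess, not elmo’s actual schema.

from django.db import models

class Application(models.Model):
    name = models.CharField(max_length=50)

class AppVersion(models.Model):
    # a release line, e.g. Firefox 6, tied to an Application
    app = models.ForeignKey(Application)
    version = models.CharField(max_length=20)

class Milestone(models.Model):
    # one release-style build of an AppVersion, e.g. Fennec 6 Beta 3
    appver = models.ForeignKey(AppVersion)
    code = models.CharField(max_length=50)

class Tree(models.Model):
    # the repository setup that compare-locales builds run against
    repositories = models.CharField(max_length=200)  # placeholder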

The trick is how all these objects are tied together. Gandalf and I designed this back in the days of the Firefox 3.6 release. Back in those days, we had loooong release cycles, with lengthy cycles even for individual milestones, and string freezes for each milestone. At that point, we’d open up sign-offs. Remember, back in the days we wouldn’t have l10n-merge on for release builds, so we could only start reviewing the localizations after string freeze. Also, we did the hg branches for a release early in the cycle, and then we would ship most of our betas from that branch, while development on central progressed merrily.

Thus, our design decisions back then were:

There’s one static repository setup for a version of an application. Umpf. Can you see how bad that is today, where we switch our repo setup every six weeks?

Whether a localizer can sign off or not depends on whether the upcoming milestone is string-frozen or not. In other words, we need to have the upcoming milestone early to begin with, which is such a hassle now that we’re doing them weekly instead of bi-monthly. Also, with l10n-merge and string-frozen branches, all that logic just … face palm.

Localizers sign off on a version of the application, with a push to its l10n repository. Pushes are per repo; appversions span repos today. I.e., I push on aurora, sign off, it’s good, the appversion migrates to beta, but the push is still on aurora.

Review actions on sign-offs are forever. Say, I r+ a sign-off on aurora; that goes to beta, but then a lack of traction makes that revision really bad to ship for the next cycle. I can’t make that sign-off bad for Firefox 12 and good for Firefox 11.

Lessons learned:

  • appversions hop from tree to tree, over time
  • sign-offs are per tree, this localization at this point is good, source-wise
  • actions on sign-offs can be per appversion
  • milestones aren’t required before we actually ship something

Or, as we say in German, we have to put the design “vom Kopf auf die Füße” (from its head onto its feet, i.e., the right way up).

April 11, 2011

Being a localizer in the rapid release cycle

Filed under: L10n, Mozilla — Axel Hecht @ 6:15 pm

We’re changing to a 6-week release train model, and this is going to impact how localizers do their contributions. The following scheme has been cycled in .planning for a bit, so this is what we’ll be doing. We’ll adapt it if needed, of course, based on experience with the next cycle or two.

Recap on the rapid release cycle: en-US developers work on mozilla-central, as they used to, and every 6 weeks, we’ll pull their contributions to another repository, called mozilla-aurora. That repository is string frozen. String changes only land in this repository as part of the merge from central to aurora. After another 6 weeks, the content goes to yet another repository, mozilla-beta. Corresponding to those, there’s l10n/mozilla-aurora and l10n/mozilla-beta. And now you know. Find a glossary at the end of this post.

There are two different localizer schemes: Early birds and friends of string freeze. Read the following descriptions and pick one for your individual localization team.

Early Birds are those localization teams that are happy to follow the mozilla-central content quickly and make sure that all issues relating to localizing that code are found and fixed. We already have a few of those teams, which have built a reputation among our hackers for providing good input to follow. We don’t need a lot of those, but the ones we have are crucial to make the plan work and to keep the code properly localizable at any time on aurora. You’ll be following the fx_central tree on the l10n dashboard to catch up on changes.

Friends of String Freeze are those teams that prefer to have stable content to localize, with a decent time window to act on it. Many of our localization teams are in this group. If you’re in this group, you’ll set your calendar alarm to the next window, hg pull -u on your mozilla-aurora clone and your l10n/mozilla-aurora clone, localize, push, test, fix, push, sign off. Then you set your calendar to the next 6-week cycle, and you’re all set. The expectation here is that the amount of strings will be rather low, so a day of l10n plus testing and fixing is fine. Usually, you should be able to deliver a great localization for the next version of Firefox in some 3 days. Firefox 5 right now is some 30 strings; other releases will be a good deal bigger, but nowhere close to the 1.2k strings of Firefox 4. You’ll be watching the fx_aurora tree on the l10n dashboard to see the status of your localization.
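
Spelled out as commands, one such cycle might look roughly like this; ab-CD stands in for your locale code, and the commit message is made up:

hg pull -u -R mozilla-aurora
hg pull -u -R l10n/mozilla-aurora/ab-CD
# localize the new strings, then:
hg commit -R l10n/mozilla-aurora/ab-CD -m 'update ab-CD'
hg push -R l10n/mozilla-aurora/ab-CD
# test, fix, push again, then sign off on the dashboard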

Sign-offs will happen on aurora, in rare cases on beta. The setup where we work towards release is aurora.

What about the beta repositories? Well, I hope to not see a necessity to land on l10n/mozilla-beta for the most part. You should expect that changes you make on l10n/mozilla-beta will be dropped once we do the next update from aurora, so you want to have the fixes on both aurora and beta, if applicable. But really, you want to be good on aurora. Then beta will be fine and no hassle.

How that maps to mercurial work:

For the Friends of String Freeze, you’ll not need to worry about anything other than pulling on both repos every cycle. We’ll take your content from l10n/mozilla-aurora to l10n/mozilla-beta, and may very well at some point stop doing l10n-central builds at all for you. Just keep things simple here.

For the Early Birds, we’ll rely on you self-identifying and doing a tad of extra work. You’ll be in best shape to merge your contributions from l10n-central to l10n/mozilla-aurora, making sure that the result has all your fixes from both central and aurora, where you want them. You’re techy-geeky-savvy anyways, so that’s alright. If at some point, we learn that there’s a pattern that benefits from automation, we’ll check in on that when we get there, too. You shouldn’t have to worry about getting content on l10n/mozilla-beta anymore than the rest, though.
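
One possible shape of that merge, assuming local clones of both repositories and with ab-CD again a placeholder:

cd l10n/mozilla-aurora/ab-CD
hg pull -u                           # current aurora state
hg pull ../../../l10n-central/ab-CD  # your central work; path depends on your layout
hg merge                             # keep the fixes from both sides
hg commit -m 'merge l10n-central into mozilla-aurora'
hg push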

Glossary:
mozilla-central is the mercurial repository that en-US code is landed to as development makes progress.
l10n-central is the tree of mercurial repositories that the early-bird localizers use as development makes progress.
central is short for either, or both, of mozilla-central and l10n-central, depending on context.

The terms around mozilla-aurora, l10n/mozilla-aurora, and aurora map to their corresponding terms for central, same for mozilla-beta, l10n/mozilla-beta, and beta.

Update: Fixed the links to map to the new and stable repository locations.

November 24, 2010

compare-locales 0.9.1 is out

Filed under: L10n, Mozilla — Axel Hecht @ 7:30 am

I released compare-locales 0.9.1 yesterday on pypi. Do the regular

easy_install -U compare-locales

to update your local copy.

This update includes two bug-fixes compared to 0.9:

  • Don’t warn about XML-defined entities like &amp;, bug 604404
  • Ensure that merged entities have a trailing newline, bug 612619

In particular the latter will make our l10n-merge code more stable. Sadly, we actually need to fix all the newly-reported errors in all stable branches and apps before we can update the production tag. Errors make compare-locales fail, and rightfully so. And failing is bad for release builds that don’t merge, also rightfully so.

November 15, 2010

As sure as logs are logs

Filed under: Mozilla — Axel Hecht @ 12:04 pm

… or not.

As promised, I’ll write a bit about build logs today. You’ll see what our logs are, and, to begin with, I’ll take you on a tour through buildMessage to explain how the logs we have end up being what you see served off of tinderbox.

First off, buildbot is basically the same thing as any regular gecko app: one main thread and loads of callbacks. So when reading on, all your spontaneous reactions are good.

The buildMessage code does:

  1. synchronous IO to load all logs of a build into memory, basically up to some 70M
  2. synchronous string handling to paste all that data together, with some extra padding
  3. synchronous compression of the resulting string
  4. synchronous base64 encoding of the compressed string

All on the main thread, all in one go, blocking. All of that to give you a single lengthy unformatted blob of text. Why?

Because our build logs are actually not a single lengthy unformatted blob of text, which is what tinderbox wants.
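
To make that concrete, here’s a condensed sketch in Python of the pattern described above; it is not the actual buildbot code, and all names are made up.

import base64
import zlib

def build_message(log_paths):
    # 1. synchronous IO: slurp every log of the build into memory
    blob = b''.join(open(p, 'rb').read() for p in log_paths)
    # 2. the string pasting happened in the join above (padding omitted)
    # 3. synchronous compression of the resulting string
    compressed = zlib.compress(blob)
    # 4. synchronous base64 encoding, still on the main thread
    return base64.b64encode(compressed)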

Let’s have a peek into what our build logs are, really. In my previous posts, I introduced you to the concept of build steps. They’re really the basic entity of work to be done for a build. Now, the logs are stored in buildbot pretty much in how the data comes, that is, each log is associated with a step, and the storage is happening as the chunks arrive. Commonly, that’d be stdout and stderr data coming from shell commands run on the slave. The information about which stream the data is on is persisted, too, as is the order, so any log looks like this, basically:

Step reference header length data
stdout length data
stdout length data
stdout length data
stderr length data
stdout length data
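
In code, you could picture such a log as an ordered list of (channel, data) chunks; the channel constants below match buildbot’s numbering if memory serves, and the chunk contents are invented:

STDOUT, STDERR, HEADER = 0, 1, 2

log_chunks = [
    (HEADER, 'make started ...'),
    (STDOUT, 'compiling ...'),
    (STDERR, 'warning: ...'),
    (STDOUT, 'done.'),
]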

As most of you aren’t among the few privileged ones to actually look at the real logs, I’ve set up a fake log page for you to take a look. It’s an l10n repack, mostly because they’re somewhat small in both step count and log size, and because I’m used to them. Here’s the actual make step highlighted. You can see the introduction being shown in blue, which is the common color for header chunks. Buildbot just uses that channel to show setup and shutdown information on the step. Then there’s the actual make output in black. If there was something on stderr, it’d be styled in red. Sorry, I didn’t quickly come up with something that has stderr.

The first take-away is that you can get to just the build output of the step you’re interested in.

If you’re nostalgic, you can check the checkbox for tinderbox; the CSS style sheet changes to show you what you’d get from tinderbox. Try to find the information again.

One further detail, there can be more than one log per step. Buildsteps that set build properties quite commonly have two logs, one that keeps track of the command that got run, and another that keeps track of the actually changed build properties. You can look at an example in the builddir step. The boring last line is the second log.

Log files are really not all that complicated, and much more useful than what we get back from tinderbox. Let’s look at some of the pros:

Log files come in as the build goes. This enables buildbot to publish build logs in almost realtime. There’s little-to-no cost for that, too; a simple node.js proxy can ensure that only one log is read at any time. Another benefit is that one can archive logs incrementally, removing the current stress on the masters to publish more data than they want to chew in one go.

Log files are per task. As the logs are associated with a step, which has a name and a builder, there’s pretty rich information available on what the data in question is actually about. Think about hg-specific error parsers for one step, ftp-specific ones for the next, and mochitest-specific ones for the one after that. All in one build. If we archive the raw data, we can easily improve our parsers and stay compatible with old logs, or add new steps to the build process without fear of breaking existing log parsers.

Tinderbox can still be fed. Even if we’re not sending out tinderbox log mails from the masters, we can still do the processing out of band, in an external process or even on an external machine, offload the masters, and avoid changing all our infrastructure in one go.

There is a hard piece, too: storage. Build logs are plentiful, and they’re anywhere from a dozen bytes to 70M, within the same build, even. There are hundreds of thousands of small files, and thousands of really large ones. I hope that adding some information on what our build logs really are helps to spark a design discussion on this. Whether to compress, and at which level. Retention, per step type, even? Store as single files, in one dir, or in a hierarchy, or as tarballs? Or all of the above as part of retention? Is HBase a fit?
