Measuring translation quality is a shared priority
Part of what makes Mozilla projects so unique is the involvement of the community. Our community prides itself on its expertise in the Web, Mozilla, and all of Mozilla’s products. Thus, delivering high-quality localizations of Mozilla products to users is a priority not only for the l10n-drivers, but one that is close to the community’s heart. Yet for something we all care so deeply about, we have trouble collecting the data required to measure and benchmark translation quality within Mozilla.
Why do we need to measure translation quality?
It’s in Mozilla’s best interest to measure translation quality for three reasons:
- Our l10n community rocks and everyone outside of Mozilla needs to know it! Community-based translation, as a practice, is often underestimated. We have tons of anecdotes that illustrate how dedicated, skilled, and talented our community is at weaving together the perfect translations for Mozilla projects, but we can’t measure a cool story. We’re out to collect measurable data that demonstrates how awesome our l10n community is, in addition to the stories we know and love.
- Our l10n community rocks and everyone within Mozilla needs to know it! This information could help key decision-makers within Mozilla when making internal decisions that have an impact on the direction of product development.
- Many of us who have brought new Mozillians into the l10n community often mention that l10n is a good place to learn, grow, and develop skills. Unfortunately, without accountability or a standard way of measuring and certifying an individual localizer’s growth, that promise rings hollow. Gathering this data at regular intervals would allow the l10n-drivers to benchmark translation quality and make good on the promise that a localizer can show the world that they’re awesome through participating in Mozilla l10n.
Currently, Mozilla has no criteria-based framework for evaluating a localization/translation’s accuracy or quality. Evaluating translation quality is a difficult task because language itself is flexible and subjective. A successful framework must account for a project’s scope as well as the most objective elements of language, such as orthography, grammar, and adherence to a corporate style guide. It would also need to be flexible, robust, interoperable, and easy for graders to use. Developing such a framework is hard, but standards bodies are already working to solve that problem.
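To make the idea of a criteria-based framework concrete, here is a minimal sketch of what error annotation and scoring could look like, loosely inspired by MQM-style error taxonomies. The categories, severity weights, and formula below are illustrative assumptions for this post, not any standard Mozilla has adopted.

```python
from dataclasses import dataclass

# Illustrative severity weights (assumed, not from any adopted standard).
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

@dataclass
class ErrorAnnotation:
    category: str   # e.g. "orthography", "grammar", "style-guide"
    severity: str   # "minor", "major", or "critical"

def quality_score(annotations, word_count):
    """Return a 0-100 score: 100 minus weighted errors per 100 words."""
    penalty = sum(SEVERITY_WEIGHTS[a.severity] for a in annotations)
    return max(0.0, 100.0 - (penalty / word_count) * 100)

# Example: a grader flags one minor orthography error and one major
# grammar error in a 200-word translation.
errors = [
    ErrorAnnotation("orthography", "minor"),
    ErrorAnnotation("grammar", "major"),
]
print(quality_score(errors, word_count=200))  # 97.0
```

Normalizing the penalty per 100 words is one way to keep scores comparable across strings of very different lengths, which matters when graders are reviewing everything from button labels to full paragraphs.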
Evaluating the options
Pilot projects are a good way for us to determine the most appropriate standard and accompanying toolchain to use within the Mozilla l10n program. In June, we’ll be running another pilot project to assess the translation quality of new strings between Firefox OS 2.1 and Firefox OS 2.2 in Spanish using two different standards and their accompanying toolchains. We’ll collect data from each, analyze their efficiency in providing actionable feedback for localizers, and determine which standard and toolchain to begin implementing within the l10n program.
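As a rough sketch of how that comparison might be tabulated, assume graders produce per-string scores under each candidate standard; the standard names and numbers below are entirely hypothetical placeholders.

```python
from statistics import mean

# Hypothetical per-string quality scores from graders evaluating the same
# Spanish string set under two candidate standards (illustrative data only).
pilot_scores = {
    "standard_a": [97.0, 92.5, 88.0, 95.0],
    "standard_b": [94.0, 90.0, 91.5, 93.0],
}

for standard, scores in pilot_scores.items():
    print(f"{standard}: mean={mean(scores):.1f}, "
          f"range={min(scores):.1f}-{max(scores):.1f}")
```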