Translation quality at Mozilla

Measuring translation quality is a shared priority

Part of what makes Mozilla projects so unique is the involvement of the community. Our community prides itself on its expertise in the Web, Mozilla, and all of Mozilla’s products. Delivering high-quality localizations of Mozilla products to users is therefore not only a priority for the l10n-drivers, but also one that is close to the community’s heart.

Currently, Mozilla has no criteria-based framework for evaluating the accuracy or quality of a localization. Determining how to evaluate translation quality is a difficult task because language itself is flexible and subjective. A successful evaluation framework has to account for a project’s scope as well as the most objective elements of language, such as orthography, grammar, and adherence to a corporate style guide. It also needs to be flexible, robust, interoperable, and easy for graders to use. Developing such a framework is difficult, but I believe we’ve experienced a breakthrough!

Background of the MQM standard: Mozilla pilot projects

The EC-funded QTLaunchPad project, coordinated by DFKI together with translation studies researchers from Brigham Young University and other North American and European organizations, set out to create a translation quality standard flexible enough to accommodate project specifications and the needs of individual languages. The Multidimensional Quality Metrics (MQM) framework was born!

The MQM framework allows an organization to identify its highest priorities in assessing the translation quality of a specific project or series of projects. The issue types in the framework are selected by the organization running the project. Graders are recruited and trained on the meaning of these issue types and on the process of grading a translation against the selected set of issue types. Graders spend between 30 and 60 minutes per day marking the translation errors they find and categorizing them by issue type. The translation is then assigned a “score” based on the combined grades issued by the graders.
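
To make the scoring step concrete, here is a minimal sketch of how a penalty-based score could be computed from graders’ annotations. The issue types, severity weights, and per-100-words normalization shown here are illustrative assumptions, not the official MQM scoring model or the exact rules used in the Mozilla sprints.

    # Minimal sketch of a penalty-based quality score, loosely modeled on the
    # grading process described above. Issue types, severity weights, and the
    # per-100-words normalization are illustrative assumptions, not the
    # official MQM scoring model.

    SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}  # assumed weights

    def quality_score(issues, word_count):
        """Return a 0-100 score: 100 minus weighted penalties per 100 words.

        `issues` is a list of (issue_type, severity) pairs marked by graders.
        """
        penalty = sum(SEVERITY_WEIGHTS[severity] for _issue_type, severity in issues)
        per_100_words = penalty * 100.0 / max(word_count, 1)
        return max(0.0, 100.0 - per_100_words)

    # Example: two minor terminology issues and one major fluency issue
    # found in a 250-word sample of translated strings.
    sample_issues = [("terminology", "minor"),
                     ("terminology", "minor"),
                     ("fluency", "major")]
    print(quality_score(sample_issues, word_count=250))  # -> 97.2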

Delphine and I collaborated with these researchers to create MQM evaluation sprints for Firefox and Firefox OS localizations. Together, we found language and subject matter experts from within the university and the community to participate in these sprints. Three sprints were organized over the last six months, each evaluating different languages and using a variety of issue types. Participants took part both virtually and on site at Brigham Young University, collaborating closely over IRC. Each individual grader evaluated between 400 and 4,000 strings in a single week!

Results of those projects

The results from these sprints were very positive. Localization teams received specific, actionable feedback on where their translations could be improved. This feedback was organized by issue type and called out specific strings that needed correction in one or more of the issue type categories. Some teams, like the French l10n team, were able to act on that feedback immediately and incorporate it into their Firefox OS localization.

Community perception of MQM

Surveying members of the community about their experience with the MQM framework returned very positive results. Generally speaking, localizers appreciated how thoroughly the framework’s issue types were defined for the sprints and felt that the process was easy to understand. Many felt that the experience was valuable to their l10n work and were in favor of the l10n-drivers implementing the standard across the Mozilla l10n program.

Plans to implement MQM framework

There are preliminary plans to develop a Mozilla l10n QA tool based on the MQM framework. Ideas have included a gamified system, letting users choose to grade by project or by isolated issue type, and a sleek, web-based GUI complete with product screenshots for each string translation being evaluated. Some members of the community have expressed interest in being involved in creating such a tool. We’ll be incorporating their feedback and getting their help as we move forward with designing this MQM-based tool.
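
Purely as a hypothetical illustration of the kind of record such a tool might store for each evaluated string, here is a small Python sketch; the field names and structure are assumptions for discussion, not a Mozilla design.

    # Hypothetical sketch of the record a web-based MQM grading tool might keep
    # for each evaluated string. Field names and structure are assumptions for
    # discussion, not a Mozilla specification.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Annotation:
        issue_type: str              # e.g. "terminology", "fluency"
        severity: str                # e.g. "minor", "major", "critical"
        comment: Optional[str] = None

    @dataclass
    class StringEvaluation:
        project: str                 # e.g. "Firefox OS"
        locale: str                  # e.g. "fr"
        source_text: str
        translated_text: str
        screenshot_url: Optional[str] = None   # product screenshot shown to graders
        annotations: List[Annotation] = field(default_factory=list)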

MQM is not yet governed by a formal standards body; its developers are in discussion with ASTM International about taking on that role. Mozilla has been invited to participate in the technical committee that would govern the MQM standard, and we’re currently evaluating what our involvement in that committee would look like.

If you have any questions concerning the MQM framework at Mozilla, please feel free to email the Mozilla L10n mailing list.

4 comments on “Translation quality at Mozilla”

  1. Julen wrote on

    The link to “Multidimensional Quality Metric (MQM) framework” is broken, it should point to http://www.qt21.eu/mqm-definition/definition-2014-06-06.html

    1. Jeff Beatty wrote on

      Thank you! I’ve fixed the link now.

  2. Arle Lommel wrote on

    Thanks for writing this up, Jeff. A few minor corrections:

    1. The bit about the developers should read “The EC-funded QTLaunchPad project coordinated by DFKI together with translation studies researchers from Brigham Young University and other North American and European organizations…” BYU was a subcontractor to the project. While we certainly valued BYU’s contribution, it is important for our funding body that we get this right 🙂

    2. MQM isn’t presently governed by ASTM. We are in discussion with ASTM about how to get them involved and increase industry and public work on MQM, but some items related to future projects aren’t entirely clear yet, even though we do want to see ASTM involved with this.

    3. Also, the best URL is http://qt21.eu/mqm-definition/ as that always links to the latest version.

    Best,

    Arle

    1. Jeff Beatty wrote on

      Thanks for the clarifications 😀
