How to measure machine-translation quality

Many people have asked me how they can reliably measure or benchmark the quality of their translation system, whether rule-based, example-based or statistical. Some have bought commercial rule-based software and are trying it out, building dictionaries and normalization rules; others are having a first go at running a Moses engine.

There are two free metrics you can run on your system's output (against reference translations) that will give you an idea of how it is scoring.

Some people use them to compare their system against Google Translate, raw MT output or other texts. You can use them, for example, to check how your system is doing against free online tools such as Google Translate, Systran or BabelFish. This may give you an idea of your progress as you customize your own engine for a particular application, taking generalist online tools as a baseline.

The tests are not difficult to carry out. All you need is some help at the installation stage if you are not familiar with Linux and running a few command lines. Once you get used to it, you can run progress checks at will.

BLEU is the industry standard. Most MT systems will sooner or later show some kind of BLEU score to prove their progress and reliability. However, BLEU has some drawbacks, and you may feel that some of its high scores do not represent a corresponding improvement when you look at the translated files.
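To make the metric concrete, here is a minimal sketch of sentence-level BLEU in Python: modified n-gram precision up to 4-grams, combined with a brevity penalty. It is an illustration of the formula only, not a replacement for a standard scoring script, which you would use for real benchmarking.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against a single reference translation."""
    cand = candidate.split()
    ref = reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped counts: a candidate n-gram is credited at most as
        # often as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Geometric mean of the n-gram precisions.
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

A perfect match scores 1.0, and a sentence sharing no 4-gram with the reference scores 0.0, which is one reason sentence-level BLEU can feel harsh: corpus-level BLEU pools the counts over the whole test set before combining them.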

We favour Meteor at Pangeanic. It does not only count word-for-word occurrences; it also brings some linguistic, tree-like information into the tests. Your BLEU scores will usually show as lower results in Meteor, although this is only a very broad rule of thumb. At Pangeanic we have seen higher Meteor scores when measuring engines producing marketing texts or general translation, because not just word occurrence is taken into account, but other relations too (e.g. words from the same family). From a post-editing point of view this may be more relevant, because it takes little time to correct a wrong verb tense, or a singular for a plural.
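At its core, Meteor combines unigram precision and recall into a harmonic mean that weights recall heavily. The sketch below shows only that core, under the simplifying assumption of exact-match unigrams; real Meteor also matches stems and synonyms (the word-family relations mentioned above) and applies a fragmentation penalty for scrambled word order.

```python
from collections import Counter

def meteor_fmean(candidate, reference, alpha=0.9):
    """Simplified Meteor core: weighted harmonic mean of unigram
    precision and recall, with recall weighted 9:1 over precision
    (alpha=0.9). Exact-match unigrams only."""
    cand = candidate.split()
    ref = reference.split()
    ref_counts = Counter(ref)
    # Each candidate word matches at most as many times as it
    # appears in the reference.
    matches = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    if matches == 0:
        return 0.0
    precision = matches / len(cand)
    recall = matches / len(ref)
    return precision * recall / (alpha * precision + (1 - alpha) * recall)
```

Because recall dominates the mean, a candidate that covers most of the reference content still scores well even if it adds a few extra words, which matches the post-editing intuition above: missing content costs more than small surface errors.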

We recommend you take 60% of the BLEU score as your initial productivity target. Once an MT system is up and running, and has been updated and refined over some months, your scores and productivity will climb steadily.

BLEU

http://www.mq.edu.au/__data/assets/file/0007/73825/0020G.xml

METEOR

http://www.cs.cmu.edu/~alavie/METEOR/

Comments

  t98907

    I think BLEU is a simple and poor index.

    Do you know the IMPACT evaluation index?
    Echizen-ya et al.: "Automatic Evaluation of Machine Translation based on Recursive Acquisition of an Intuitive Common Parts Continuum", Proceedings of the Eleventh Machine Translation Summit, pp. 151-158, 2007.

    How about IMPACT compared to BLEU?
