
19/07/2012

Translation devices for conferences: Machine Translation at LocWorld

by Manuel Herranz and Kerstin Bier

Pangeanic was a guest speaker at Localization World's Moses Core Pre-Conference Day in Paris, June 2012, thanks to its efforts to turn Moses, the EU-sponsored academic machine translation development, into a commercially viable, user-friendly, web-based environment with self-training (DIY) features. PangeaMT overcomes many of Moses's shortcomings and is built as a solution for language service providers and translation users large and small.

The Moses platform is now the most widespread SMT translator in the world, but deployment is not for the faint-hearted: it takes large amounts of data to make it work, language pairs have to be built, and so on. This session was geared at presenting implementation use cases from several organizations. I will summarize a star application at Sybase.

Kerstin Bier is a member of the Sybase Technical Publications Solutions team (Sybase is now part of SAP). Kerstin has been one of the early adopters of Moses-based PangeaMT solutions, working into and out of English and German since 2009. At that time, Sybase was looking to optimize its translation processes further and felt that traditional TM technology was fully exploited; a new focus was required beyond fuzzy matching. Microsoft had taken Sybase data to train its own engines and presented the results at a TAUS conference in Portland.

The results were too good to be true, but they demonstrated that the statistical approach was the right way to go. Trial projects began with Pangeanic and concentrated on small engines: one language combination, one product, a limited set of content. The translation team was ready to pull the plug should it go wrong. However, the trials were successful, with BLEU scores of over 49% and 70% and higher productivity with full post-editing, which meant management buy-in.
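For reference, BLEU compares machine output against human reference translations and is usually reported on a 0-100 scale, as in the figures above. Here is a minimal sketch of a corpus-level BLEU calculation, assuming the sacrebleu Python library; the sentence pairs are invented for illustration:

```python
# A minimal sketch of corpus-level BLEU scoring with sacrebleu;
# the hypotheses and references are invented examples.
import sacrebleu

hypotheses = [
    "The server restarts automatically after the update.",
    "Click Save to apply the configuration changes.",
]
# One reference stream, parallel to the hypotheses.
references = [[
    "The server restarts automatically once the update completes.",
    "Click Save to apply the configuration changes.",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.1f}")  # reported on a 0-100 scale
```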

The productivity increases soon translated into 20% cost savings. The initial training data set was not large: about 5M in-domain words in TMX files. While other presentations in Portland stated that 10M words were the minimum at the time, the engine built for Sybase's purposes worked fine. Drawing on its experience as a translation company, Pangeanic added inline markup handling (PangeaMT) and some other small features for a deployment in which no user interface was required. The in-house setup was a reasonably powerful 64-bit, 8-CPU machine.
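TMX is simply XML, so pulling bilingual training segments out of it is straightforward. A minimal sketch using Python's standard library; the file name and language codes are placeholders, and inline tags inside <seg> elements are ignored for simplicity:

```python
# A minimal sketch of extracting (source, target) pairs from a TMX file
# for engine training; the file name and language codes are placeholders.
import xml.etree.ElementTree as ET

XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def read_tmx(path, src_lang="en", tgt_lang="de"):
    """Yield (source, target) pairs from the <tu> units of a TMX file."""
    tree = ET.parse(path)
    for tu in tree.iter("tu"):
        segs = {}
        for tuv in tu.iter("tuv"):
            lang = (tuv.get(XML_LANG) or tuv.get("lang") or "").lower()
            seg = tuv.find("seg")
            if seg is not None and seg.text:
                segs[lang.split("-")[0]] = seg.text.strip()
        if src_lang in segs and tgt_lang in segs:
            yield segs[src_lang], segs[tgt_lang]

# Moses-style training expects plain text, one sentence per line per side.
pairs = list(read_tmx("sample_docs.tmx"))
```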

Some of the challenges Kerstin outlined in Paris were:

  • Moses does not offer any kind of automated engine (re)training;
  • there are data (availability) issues;
  • very little has been done to work out integration with commercial workflows (WorldServer);
  • an additional post-editing (PE) environment is needed, and translator resistance has to be overcome;
  • further work is needed on custom-built inline XML tag handling (such as Pangeanic developed);
  • in general, there was agreement that further work within the Moses community should be encouraged, à la Okapi;
  • metrics that go further, deeper and wider than BLEU scores should be part of the Moses core. At the moment, only MT providers offer some kind of scoring system or "confidence scores", for example on engine performance: bad output vs. good output, post-editing effort, productivity increase. Such a scoring system could give a rough indication of bad quality, so that poor segments can be filtered out and not sent to PE (see the sketch after this list). That would help to reduce translator resistance to PE adoption;
  • there are challenges with new terminology and "hybrid" content: translations mixed with English.
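That confidence-gating idea is easy to prototype: segments scoring below a threshold go back to human translation instead of the post-editing queue. A minimal sketch; the scores, threshold and example segments are all invented, not any particular provider's metric:

```python
# A minimal sketch of confidence-based routing of MT output;
# scores and threshold are invented for illustration.
def route_segments(segments, threshold=0.6):
    """Split MT output into a post-editing queue and a re-translate queue."""
    post_edit, retranslate = [], []
    for text, confidence in segments:
        (post_edit if confidence >= threshold else retranslate).append(text)
    return post_edit, retranslate

mt_output = [
    ("Der Server wird nach dem Update neu gestartet.", 0.82),
    ("Klicken Konfiguration die speichern zu.", 0.31),  # garbled: skip PE
]
pe_queue, ht_queue = route_segments(mt_output)
print(len(pe_queue), "segments to post-editing,", len(ht_queue), "to retranslation")
```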

Moses does not offer some of these things, but the problems can be overcome by building tools around it, as Pangeanic has done with metrics for measuring output quality. It is still a "toolkit", and it needs to meet users' needs. MT output quality depends on your data, but it can be improved greatly through pre- and post-processing.
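As one example of such pre- and post-processing, inline markup can be masked with placeholders before decoding and restored afterwards, so the SMT engine never sees raw tags. A minimal sketch; the regex and placeholder format are assumptions for illustration, not Pangeanic's actual implementation:

```python
# A minimal sketch of tag protection as a pre/post-processing step;
# the regex and placeholder format are assumptions, not PangeaMT's code.
import re

TAG_RE = re.compile(r"<[^>]+>")

def mask_tags(segment):
    """Replace inline tags with numbered placeholders; keep a map back."""
    tags = []
    def _store(match):
        tags.append(match.group(0))
        return f"__TAG{len(tags) - 1}__"
    return TAG_RE.sub(_store, segment), tags

def unmask_tags(segment, tags):
    """Restore the original tags into the translated segment."""
    for i, tag in enumerate(tags):
        segment = segment.replace(f"__TAG{i}__", tag)
    return segment

masked, tags = mask_tags("Press <b>Save</b> to continue.")
translated = masked  # stand-in for the actual engine call
print(unmask_tags(translated, tags))
```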

Kerstin went on to state in her presentation that BLEU and other metrics are just averages, sometimes not relating at all to output quality. Initially, translators who had become post-editors complained and offered highly subjective evaluations, such as "I had to re-translate everything". Translator feedback has to be treated with caution.

Sybase preferred METEOR, as it scored very close to translator evaluation and is segment-based. This was done to work out a fair payment scheme for post-editors.
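Because METEOR is computed per segment, every sentence gets its own score, unlike corpus-level BLEU, which only reports an average. A minimal sketch, assuming the NLTK library with its WordNet data downloaded; the sentences are invented:

```python
# A minimal sketch of segment-level METEOR scoring with NLTK;
# requires nltk.download("wordnet") once, and the sentences are invented.
from nltk.translate.meteor_score import meteor_score

# Recent NLTK versions expect pre-tokenized input.
reference = "Click Save to apply the configuration changes.".split()
hypothesis = "Click Save to apply configuration changes.".split()

# One score per segment, which makes per-sentence payment schemes workable.
score = meteor_score([reference], hypothesis)
print(f"METEOR: {score:.3f}")
```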
Next time you think languages, think Pangeanic
