3 min read
The NTEU consortium, led by Pangeanic since 2019, has successfully completed the bulk data upload to the ELRC, enabling neural machine translation...
Fig. 1 A view of the Machine Translation Evaluation Tool (MTET)

Human graders could leave unclear evaluations unfinished if they needed to stop and come back later, although evaluating segments consecutively, one sentence after another, was preferred. As we can see below, some language combinations (such as Irish Gaelic into Greek) were a challenge!
Fig. 2 Typical evaluation screen

In order to guarantee final quality, human graders did not know which output came from the NTEU engines and which came from a second translation by a generalist online MT provider that was used as a benchmark. They rated each output by moving a slider from 0 to 100. The aim was that, during the evaluation, graders could assess whether the machine-generated sentence adequately expressed the meaning contained in the source, that is, how close it was to what a human would have written.
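The blind setup described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual NTEU tooling: the segment data, score values, and function names below are all hypothetical, assuming each source sentence has two anonymized outputs (one from an NTEU engine, one from the benchmark provider) presented in shuffled order and scored 0-100.

```python
import random
from statistics import mean

# Hypothetical evaluation data: each segment has two MT outputs,
# one from an NTEU engine and one from a benchmark system.
segments = [
    {"source": "Sentence 1", "nteu": "Output A1", "benchmark": "Output B1"},
    {"source": "Sentence 2", "nteu": "Output A2", "benchmark": "Output B2"},
]

def blind_pairs(segments, seed=0):
    """Yield (segment_index, system_label, text) with the two outputs of
    each segment shuffled, so the grader cannot tell which system is which."""
    rng = random.Random(seed)
    for i, seg in enumerate(segments):
        pair = [("nteu", seg["nteu"]), ("benchmark", seg["benchmark"])]
        rng.shuffle(pair)  # hide system identity behind presentation order
        for system, text in pair:
            yield i, system, text

# Graders assign a 0-100 adequacy score per output; a fixed placeholder
# value stands in for the grader's slider position here.
scores = {"nteu": [], "benchmark": []}
for i, system, text in blind_pairs(segments):
    scores[system].append(80)  # placeholder slider value

# Averaging per system (after un-blinding) shows which engine ranked higher.
for system, vals in scores.items():
    print(system, mean(vals))
```

Because the system label travels with each shuffled item, scores can be attributed back to the correct engine after grading without the grader ever seeing it.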