End of Multimodal Pattern Recognition Project Heralds New Features for Machine Translation

by Manuel Herranz

The closure of the MIPRCV project (Multimodal Interactive Pattern Recognition) at the beginning of November showcased real-life industry applications of Spanish research and development, with examples from the bank La Caixa (banking documents), the translation company Pangeanic (language and machine translation), and Telefónica (image retrieval), among others.

All systems relied on the concept of using existing information (be it banking documents such as receipts, invoices, and orders; translated bilingual files; classifications of web images with and without text; etc.) and processing it under a dual approach: off-line training to produce good-enough models, and user interaction that generates feedback from which the system learns and improves automatically. This is also the basis, for example, of Pangeanic's DIY SMT system applied to machine translation.
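The offline-training-plus-feedback loop described above can be sketched in a few lines. This is a minimal toy illustration, not the real DIY SMT or MIPRCV code: the class, its methods, and the word-by-word phrase dictionary are all assumptions standing in for a genuine statistical translation model.

```python
# Toy sketch of the dual concept: an off-line-trained model that is then
# refined interactively from user corrections. A real SMT system would use
# phrase tables and language models; here a word dictionary stands in.

class ToyTranslator:
    def __init__(self, phrase_table):
        # Off-line training step: start from a "good enough" model,
        # here simply a source-to-target word dictionary.
        self.phrase_table = dict(phrase_table)

    def translate(self, source):
        # Look each word up; unknown words pass through unchanged.
        return " ".join(self.phrase_table.get(w, w) for w in source.split())

    def feedback(self, source, corrected_target):
        # Interactive step: fold the user's correction back into the
        # model so the same mistake is not repeated next time.
        for src, tgt in zip(source.split(), corrected_target.split()):
            self.phrase_table[src] = tgt


mt = ToyTranslator({"hola": "hello"})
print(mt.translate("hola mundo"))        # "hello mundo" - "mundo" is unknown
mt.feedback("hola mundo", "hello world") # user supplies the correction
print(mt.translate("hola mundo"))        # "hello world" - the system learned
```

The point of the sketch is the second call: after a single user interaction, the model output changes without any retraining pass, which is the "learns and improves automatically" behaviour the project demonstrated.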

Research and industry presentations ranged from prototypes to applied technology already in use in industry. One star application is the extraction of semantic information (and, in the case of language, sometimes syntactic information), which provides very powerful cataloguing and search capabilities to organizations such as banks, whilst similar techniques improve machine translation applications. Cataloguing documents using semantic techniques saved the bank La Caixa up to 15 minutes per transaction, thus improving productivity per branch and per employee by reducing and automating tax, money, and salary transfers.

In translation, increases in productivity were measured by production per person, in words per hour and per day, with Pangeanic reporting successful machine translation and post-editing cases of well over 20,000 words per day in controlled domains.

Other applications of multimodal pattern recognition dealt with online videos automatically classified by an interactive system using tag annotation and other techniques; image retrieval from the Internet on a given subject; cooperative detection of and reaction to human actions (automated detection of security disruptions); application of pattern recognition to handwritten texts in order to digitize old documents; facial recognition by robots and ubiquitous robotics (human-robot interaction); advanced driving; improved hand and finger recognition; manipulation of cubical data using gestures; control screens; and more.

Pangeanic is committed to continuing to expand its R&D capabilities in collaboration with large-scale scientific programs, in order to incorporate the latest state-of-the-art advances into its PangeaMT technologies.

A few examples of how multimodal systems can be used for data processing, image retrieval, and image processing can be viewed on Pangeanic's YouTube channel, including moving robots!

Next time you think languages, think Pangeanic
Machine Translation Engines from PangeaMT

Follow us on Twitter: @Pangeanic @manuelhrrnz
