Advantages and Disadvantages of LLM Machine Translation versus Neural Machine Translation

Written by Manuel Herranz | 07/28/23

February 2025 Update

Several readers have pointed out the confusing terminology of "LLM translation" versus "NMT translation" in media, talks, and reports. Gartner and industry analysts, for example, speak only of neural machine translation, while solution providers and users in general seem to have moved comfortably to the concepts of "LLM translation" or "AI Translation" for either raw MT or customized, adapted MT. Pangeanic has been recognized in Gartner's Hype Cycle for Language Technologies for two consecutive years (2023 and 2024), where the technology is generally understood as "neural".

Why is it confusing?

"LLM translation" and "NMT" are based on neural networks and, typically, the Transformers technology. The distinction is about specialization and training data, not the fundamental technology; thus, the terminology can be confusing. It is, in fact, a common source of misunderstanding, even among professionals. Understanding the difference between various machine translation approaches can be confusing because they all utilize neural networks, although they operate differently. The confusion stems from terminology overlap and underlying technical similarities. All modern machine translation is neural-based, but the key distinction lies in how these neural networks are trained and deployed. "Neural Machine Translation" (NMT) refers to task-specific, supervised neural models trained exclusively for translation using parallel data that directly maps source language to target language.

In contrast, "LLM translation" is performed by general-purpose neural models trained on a vast array of tasks including classification, auto-completion, NER, translation, summarization, question-answering, and more. While LLMs typically benefit from much larger datasets, they lack the specialized focus of dedicated NMT systems. This fundamental difference in training approach and specialization creates confusion when discussing "neural" translation, as both methods use neural networks but with significantly different architectures, training methodologies, and performance characteristics.

| Neural Machine Translation (NMT) | Large Language Models (LLMs) |
| --- | --- |
| Machine translation systems are trained end-to-end on parallel bilingual corpora (source-target sentence pairs). | LLMs are very large neural networks (e.g., GPT-4, Claude, Gemini, Llama, DeepSeek, PaLM) trained on huge monolingual and some bilingual/multilingual data, usually in a self-supervised way (predicting the next token/word). |
| These models are always neural (e.g., encoder-decoder Transformers) and were a significant step up from statistical MT. | Translation is one of many tasks LLMs can perform, but even though they are neural models, LLMs are not specialized for translation. |
| Examples: Google Translate (traditional NMT model), MarianNMT, Fairseq models. They typically translate whole sentences or a few sentences at a time (attention window). | When you ask an LLM to translate, you are leveraging its "generalist" capabilities, not a model optimized explicitly for translation. |



Neural Machine Translation (NMT) and Large Language Models (LLMs) are both based on neural network architectures, and their training methodologies and application domains have recently begun to converge. "Traditional" NMT models (Sequence-to-Sequence, or Seq2Seq) are trained from randomly initialized weights on large-scale parallel corpora, with the objective of optimizing translation accuracy between source and target languages. General LLMs are not: they are trained to generate text open-endedly (and can thus hallucinate). In recent years, however, the machine translation community has increasingly adopted strategies from the broader field of Natural Language Processing, particularly the use of pre-trained language models. This trend is motivated by the demonstrated effectiveness of models such as BERT, which are pre-trained on large amounts of monolingual data and fine-tuned for specific tasks. In the context of NMT, this approach is particularly valuable for low-resource languages, where parallel data is scarce. Models such as mBART exemplify this methodology: they first undergo multilingual pre-training to recover masked tokens and are then fine-tuned on translation tasks, thereby leveraging transfer learning to improve performance.
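To make the mBART example concrete, here is a minimal sketch using the Hugging Face transformers library and the public facebook/mbart-large-50-many-to-many-mmt checkpoint, a multilingual model that was pre-trained on masked-token recovery and then fine-tuned for translation. The package names and the checkpoint are assumptions about the reader's environment, not a description of Pangeanic's stack.

```python
# Minimal sketch: translating with a multilingually pre-trained, then
# translation-fine-tuned model (mBART-50). Assumes `pip install transformers
# torch sentencepiece` and access to the public checkpoint.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

model_name = "facebook/mbart-large-50-many-to-many-mmt"
model = MBartForConditionalGeneration.from_pretrained(model_name)
tokenizer = MBart50TokenizerFast.from_pretrained(model_name)

tokenizer.src_lang = "en_XX"  # source language: English
inputs = tokenizer("Parallel data is scarce for low-resource languages.",
                   return_tensors="pt")

# Force the decoder to start generating in the target language (Spanish).
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["es_XX"],
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```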

In parallel, large generative language models have shown the ability to perform translation without explicit fine-tuning, using prompt-based or “zero-shot” approaches. This has been evaluated systematically since 2023, including studies with GPT-3.5 and GPT-4. Research indicates that these models can produce fluent and competitive translations in high-resource language pairs, particularly when translating into English. However, results are less consistent for low-resource languages, a limitation attributable to the predominance of English and high-resource language data in the training corpus of these models. Benchmarking initiatives such as WMT23 have found that GPT-4 performs comparably to specialized NMT systems for certain translation directions, though domain-specific or low-resource scenarios continue to pose challenges.
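By contrast, prompt-based ("zero-shot") translation with a general-purpose LLM is just an instruction. Below is a minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and language pair are illustrative assumptions, not the setup of the WMT23 evaluations cited above.

```python
# Minimal sketch: zero-shot, prompt-based translation with a general LLM.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# the model and prompt are illustrative, not the WMT23 study configuration.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model; an assumption here
    temperature=0,   # reduce output variability
    messages=[
        {"role": "system",
         "content": "You are a professional translator. Translate the user's "
                    "text from English into German. Output only the translation."},
        {"role": "user",
         "content": "Parallel data is scarce for low-resource languages."},
    ],
)
print(response.choices[0].message.content)
```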

In short:

  1. All modern MT is neural: the difference is how the neural network is trained and used.
  2. "Neural Machine Translation" (NMT) = a task-specific, supervised neural model, trained only for translation (parallel data, source→target).
  3. "LLM translation" = translation performed by a general-purpose neural model: also a neural network, but trained for everything (translation, summarization, Q&A, etc.), usually with much more data but less task-specific focus.

Key Points in this Article:

  1. Training and Functionality: LLMs are trained to predict the next word in a sentence, enabling them to generate coherent text across various contexts. In contrast, NMT systems are specifically trained on bilingual text pairs to perform direct translations between languages.  
  2. Strengths and Weaknesses: LLMs offer versatility and fluency in generating translations, making them suitable for tasks requiring natural language generation. However, they may lack the precision and control found in NMT systems, which are optimized for accuracy in direct translations.  

 

Fun Fact: Did you know that many LLM scientists, like Ilya Sutskever (OpenAI co-founder and former Chief Scientist) or Aidan Gomez (Cohere CEO), began their careers in machine translation? Gomez co-authored the Transformer paper "Attention Is All You Need" at Google, work rooted in machine translation research. Sutskever wrote and co-wrote several papers on machine translation and encoder-decoder technologies before joining OpenAI. This reflects how closely related both technologies are.

 

Alan Turing Statue in Sackville Park, Manchester (UK)

 

 

Further Reading: What is AI and what is Artificial General Intelligence? (AGI) 

 

Original Article:

2023 will be remembered as the year in which two methodologies emerged as leaders in machine translation: Large Language Models (LLMs) and Neural Machine Translation (NMT) systems.

While both have revolutionized the process of automatic language translation, they present unique advantages and limitations. This blog post aims to provide a comparative analysis of these two techniques to help elucidate their role in the evolving landscape of machine translation. 

Content

  1. Large Language Models (LLMs)
     1. Advantages of machine translation through LLMs
     2. Disadvantages of machine translation through LLMs
  2. Neural Machine Translation (NMT)
     1. Advantages of Neural Machine Translation
     2. Disadvantages of Neural Machine Translation
  3. Key takeaways

 

Large Language Models (LLMs) 


LLMs, such as ChatGPT, are large-scale language models trained on vast amounts of text data. They excel at producing grammatically accurate translations and demonstrate a strong grasp of language structure.

Advantages of machine translation through LLMs 

  1. Comprehensibility: LLMs tend to produce translations that are grammatically correct and easy to comprehend.

  2. Adaptability: Owing to their training on diverse corpora, LLMs are well-equipped to handle a broad range of topics and language styles.

  3. Improvement Over Time: As LLMs continue to learn from user interactions and a broadening corpus of text, their performance can improve over time.

Disadvantages of machine translation through LLMs 

Despite their generative versatility, large language models have certain disadvantages compared to traditional Sequence-to-Sequence neural machine translation.  

  1. Domain-Specific Translation: Large Language Models (LLMs) like GPT-3 are trained on a vast corpus of general text, which makes them fairly adept at general-purpose translations. However, when it comes to translating texts from specific domains, especially technical or scientific ones, LLMs may struggle due to the lack of specialized training data. Custom-built translation models, on the other hand, can be trained on domain-specific corpora, enabling them to produce more accurate translations for texts in their specialty areas.

  2. Hallucinations: Large Language Models are generative. They have been trained to generate text (like massive auto-completion machines), so their natural tendency is to "say more" (what we know as hallucinations). Such errors can be hard to spot because the text reads very fluently, although more fluently in some languages than others. Suddenly, a word in a list may be missing, a concept added, or a term substituted for what the LLM considers better style or statistically more relevant. Human proofreaders may find such changes unfit or surprising, or may not realize that a concept or an item in a list is missing.

  3. Limitations on the Availability of Language Pairs: LLMs are often trained on a large number of language pairs, which can dilute their proficiency in any specific pair. Custom-built translation models, however, are usually designed for specific language pairs, allowing them to achieve a higher level of performance in translating between those languages.

  4. Handling of Idiomatic Expressions: to be fair, this is a relative strength rather than a weakness: LLM systems are typically better equipped than sentence-level NMT to translate idiomatic expressions and colloquial language, capturing subtler aspects of language.

  5. Quality Control and Consistency: Since LLMs generate translations based on probabilistic models, their outputs have a degree of unpredictability, which can lead to inconsistencies in the translation (see the decoding sketch after this list). Custom-built translation models often have more predictable behavior and produce more consistent translations, as they are designed specifically for the task of translation and can incorporate mechanisms to ensure consistency across longer texts.

  6. Lack of Cultural Nuance: LLMs may fail to capture the cultural nuances and idioms unique to specific languages or regions.
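The consistency point above (item 5) can be made concrete through decoding settings. Below is a minimal sketch, assuming the Hugging Face transformers library and the small public t5-small checkpoint (whose pre-training tasks included English-to-German translation); it is meant only to show why sampled output can vary between runs while greedy decoding does not.

```python
# Minimal sketch: probabilistic sampling vs. deterministic (greedy) decoding.
# Assumes `pip install transformers torch sentencepiece`; t5-small is a public
# checkpoint whose pre-training included English-to-German translation.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer(
    "translate English to German: The contract must be signed by Friday.",
    return_tensors="pt",
)

# Sampling: output may differ between runs, which hurts consistency.
for _ in range(2):
    sampled = model.generate(**inputs, do_sample=True, temperature=1.2,
                             max_new_tokens=40)
    print("sampled:", tokenizer.decode(sampled[0], skip_special_tokens=True))

# Greedy decoding: the same input always yields the same output.
greedy = model.generate(**inputs, do_sample=False, max_new_tokens=40)
print("greedy: ", tokenizer.decode(greedy[0], skip_special_tokens=True))
```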

Recommended Reading: 

Generative AI: Privacy, Data Protection and How can it Affect your Business

 

Neural Machine Translation (NMT) 


NMT systems translate languages using artificial neural networks, such as Recurrent Neural Networks (RNNs) and Transformer-based models. They are known for their ability to consider broader context and manage idiomatic expressions.

Advantages of Neural Machine Translation (NMT) 

  1. Domain Understanding: Thanks to their deep learning capabilities, NMT systems can be trained to solve specific translation challenges (technical and engineering translations, medical and healthcare translations, legal, etc.), leading to more accurate translations (a brief training sketch follows this list).
     
  2. Handling of Idiomatic Expressions: a caveat rather than an advantage: generic, sentence-level NMT often struggles with idiomatic expressions and slang, as these language constructs are highly culture-specific and vary greatly even within a language. Custom-built translation models and LLMs, especially those trained on corpora that include a substantial amount of colloquial language, can be more adept at translating such expressions accurately.

  3. Consistency: NMT systems often produce more consistent translations, especially across longer texts or documents.
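The domain adaptation described in point 1 amounts to continuing supervised training on in-domain parallel pairs. Here is a minimal, illustrative sketch (this is not Pangeanic's Deep Adaptive pipeline), assuming the Hugging Face transformers library, the public Helsinki-NLP/opus-mt-en-fr checkpoint, and a toy two-segment "corpus"; real adaptation uses far larger in-domain corpora and proper validation.

```python
# Minimal sketch: adapting a pre-trained NMT model with in-domain parallel
# pairs. Assumes `pip install transformers torch sentencepiece`; the data and
# hyperparameters are purely illustrative, not a production recipe.
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Tiny in-domain (medical-style) parallel "corpus", purely illustrative.
pairs = [
    ("Administer 5 mg twice daily.", "Administrer 5 mg deux fois par jour."),
    ("Store below 25 °C.", "Conserver en dessous de 25 °C."),
]

model.train()
for src, tgt in pairs:
    batch = tokenizer([src], text_target=[tgt], return_tensors="pt")
    loss = model(**batch).loss  # supervised source-to-target objective
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```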

Disadvantages of Neural Machine Translation (NMT) 

  1. Context fluency: NMT models translate sentence by sentence, which can be problematic in contextual languages such as Japanese or in situations like conversations and dialogue. There may be gender or formality disagreements between sentences, because each sentence is translated in isolation (see the sketch after this list).

  2. Struggle with Low-Resource Languages: Like LLMs, NMT systems can struggle to translate low-resource languages accurately due to limited training data.

  3. Inflexibility: Once an NMT system has been trained, it can be difficult to adjust or adapt its performance without complete retraining, which can be costly in terms of time and resources. Pangeanic's Deep Adaptive makes this kind of adaptation easy and affordable.
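The sentence-by-sentence limitation in point 1 is easy to demonstrate. A minimal sketch, assuming the Hugging Face transformers library and the public Helsinki-NLP/opus-mt-en-fr MarianNMT checkpoint: because each sentence is translated independently, the model cannot use the first sentence to resolve the ambiguous pronoun in the second.

```python
# Minimal sketch: sentence-level NMT loses cross-sentence context.
# Assumes `pip install transformers torch sentencepiece` and the public
# Helsinki-NLP/opus-mt-en-fr checkpoint.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# "It" refers to "the printer" in the first sentence, but the second sentence
# is translated in isolation, so the model must guess the French gender.
sentences = [
    "The printer stopped working.",
    "It is broken.",
]

for sentence in sentences:
    batch = tokenizer([sentence], return_tensors="pt")
    translated = model.generate(**batch, max_new_tokens=40)
    print(tokenizer.decode(translated[0], skip_special_tokens=True))
```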

Key Takeaways 

 It is important to note that although we have distinguished between LLMs and NMTs here for comparative purposes, there is a significant overlap between both technologies.

Many modern LLMs are created using neural network architectures similar to NMTs. The main difference lies in their training and use: LLMs are trained to predict the next word in a sentence and can generate text, while NMT systems are trained explicitly on bilingual text pairs to translate from one language to another. 

LLM and NMT systems have their strengths and weaknesses. To get the most out of these technologies, it is essential to understand their respective capabilities and to choose the most appropriate approach depending on the context and the specific requirements. 

While LLMs offer high adaptability and generate fluent, grammatically correct texts, they may miss cultural nuances and lack the precision and control of dedicated systems. NMT systems, with their task-specific training and consistent handling of terminology, may require more resources and be less flexible.

In conclusion, both systems have made substantial contributions to the field of machine translation, each with its unique strengths and weaknesses. As the field continues to advance, combining these two techniques, leveraging the strengths of each, could pave the way toward even more accurate and nuanced machine translation systems.