Large Language Models “ad referendum”: How Good Are They at Machine Translation in the Legal Domain?

Abstract

This study evaluates the machine translation (MT) quality of two state-of-the-art large language models (LLMs) against a traditional neural machine translation (NMT) system, Google Translate, across four language pairs in the legal domain. It combines automatic evaluation metrics (AEMs) and human evaluation (HE) by professional translators to assess translation ranking, fluency and adequacy. The results indicate that while Google Translate generally outperforms the LLMs on AEMs, human evaluators rate the LLMs, especially GPT-4, as comparable to or slightly better at producing contextually adequate and fluent translations. This discrepancy suggests that LLMs have potential for handling specialized legal terminology and context, and it highlights the importance of human evaluation methods in assessing MT quality. The study underscores the evolving capabilities of LLMs in specialized domains and calls for a reevaluation of traditional AEMs to better capture the nuances of LLM-generated translations.
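The abstract does not list which AEMs the authors used; as a minimal illustrative sketch, the snippet below shows how two common corpus-level AEMs (BLEU and chrF) are typically computed with the sacrebleu library. The legal-domain sample segments are invented placeholders, not the study's data.

# Sketch: scoring MT hypotheses against human references with sacrebleu.
# Sample sentences are hypothetical; the paper's metric suite and test set
# are not specified in this abstract.
from sacrebleu.metrics import BLEU, CHRF

# System outputs (e.g., from an LLM or Google Translate), one per segment.
hypotheses = [
    "The contract shall be governed by the laws of Spain.",
    "Either party may terminate this agreement with 30 days' notice.",
]

# Human reference translations. sacrebleu expects a list of reference
# streams (each aligned with the hypotheses), hence the extra nesting.
references = [[
    "This contract shall be governed by Spanish law.",
    "Either party may terminate the agreement upon 30 days' written notice.",
]]

print(BLEU().corpus_score(hypotheses, references))  # prints e.g. "BLEU = ..."
print(CHRF().corpus_score(hypotheses, references))  # prints e.g. "chrF2 = ..."

Corpus-level scores like these are what the study's human-evaluation results are contrasted against: such surface-overlap metrics can penalize legitimate rephrasings that human judges rate as adequate and fluent.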

Citation (APA)

Briva-Iglesias, V., Camargo, J. L. C., & Dogru, G. (2024). Large language models “ad referendum”: How good are they at machine translation in the legal domain? Monografías de Traducción e Interpretación (MonTI), (16), 75–107. https://doi.org/10.6035/MonTI.2024.16.02
