Do GPTs Produce Less Literal Translations?

Abstract

Large Language Models (LLMs) such as GPT-3 have emerged as general-purpose language models capable of addressing many natural language generation or understanding tasks. On the task of Machine Translation (MT), multiple works have investigated few-shot prompting mechanisms to elicit better translations from LLMs. However, there has been relatively little investigation on how such translations differ qualitatively from the translations generated by standard Neural Machine Translation (NMT) models. In this work, we investigate these differences in terms of the literalness of translations produced by the two systems. Using literalness measures involving word alignment and monotonicity, we find that translations out of English (E→X) from GPTs tend to be less literal, while exhibiting similar or better scores on MT quality metrics. We demonstrate that this finding is borne out in human evaluations as well. We then show that these differences are especially pronounced when translating sentences that contain idiomatic expressions.
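The abstract describes literalness measures based on word alignment and monotonicity. As a rough illustration (not the paper's exact implementation), one common way to score alignment monotonicity is a Kendall-tau-like statistic over aligned target positions: the fraction of source-ordered alignment pairs whose target indices are also in order. The function below is a minimal sketch under that assumption; the `alignment` input format (source-index, target-index pairs) is what aligners such as fast_align emit.

```python
def monotonicity(alignment):
    """Score how monotonic a word alignment is, in [0, 1].

    `alignment` is a list of (src_index, tgt_index) pairs. The score is
    the fraction of alignment pairs, taken in source order, whose target
    indices are also non-decreasing (a Kendall-tau-like statistic).
    Lower scores indicate more reordering, one signal of a less literal
    translation.
    """
    pairs = sorted(alignment)            # order links by source position
    tgt = [t for _, t in pairs]
    n = len(tgt)
    if n < 2:
        return 1.0                       # trivially monotone
    concordant = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if tgt[i] <= tgt[j]
    )
    return concordant / (n * (n - 1) / 2)

# A perfectly monotone alignment scores 1.0; a fully reversed one scores 0.0.
print(monotonicity([(0, 0), (1, 1), (2, 2)]))   # 1.0
print(monotonicity([(0, 2), (1, 1), (2, 0)]))   # 0.0
```

Under this kind of measure, a translation that tracks the source word order closely scores near 1.0, while a freer rendering that reorders clauses scores lower, which matches the abstract's claim that GPT translations out of English tend to be less literal.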

Citation (APA)

Raunak, V., Menezes, A., Post, M., & Awadalla, H. H. (2023). Do GPTs Produce Less Literal Translations? In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 1041–1050). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-short.90
