Augmenting Large Language Model Translators via Translation Memories


Abstract

Using translation memories (TMs) as prompts is a promising approach to in-context learning of machine translation models. In this work, we take a step towards prompting large language models (LLMs) with TMs and making them better translators. We find that the ability of LLMs to “understand” prompts is indeed helpful for making better use of TMs. Experiments show that the results of a pre-trained LLM translator can be greatly improved by using high-quality TM-based prompts. These results are even comparable to those of state-of-the-art NMT systems, which have access to large-scale in-domain bilingual data and are well tuned on the downstream tasks.
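To make the idea of TM-based prompting concrete, the sketch below shows one plausible way to retrieve similar translation-memory entries and format them as in-context examples for an LLM translator. It is an illustrative assumption, not the authors' exact retrieval or prompt format; the toy TM, the similarity measure (difflib), and the prompt template are all hypothetical.

```python
# Hypothetical sketch of TM-based prompting (not the paper's exact method):
# retrieve the TM entries most similar to the source sentence and prepend
# them to the prompt as in-context translation examples.
from difflib import SequenceMatcher

# Toy translation memory: (source, target) pairs. A real TM would hold
# domain-specific bilingual data and use a stronger similarity metric.
TM = [
    ("Press the power button to turn on the device.",
     "Drücken Sie die Ein/Aus-Taste, um das Gerät einzuschalten."),
    ("Hold the button for five seconds to reset.",
     "Halten Sie die Taste fünf Sekunden lang gedrückt, um zurückzusetzen."),
]

def retrieve_tm(source: str, k: int = 2):
    """Return the k TM entries whose source side is most similar to `source`."""
    scored = [(SequenceMatcher(None, source, s).ratio(), s, t) for s, t in TM]
    scored.sort(reverse=True)
    return [(s, t) for _, s, t in scored[:k]]

def build_prompt(source: str) -> str:
    """Format retrieved TM pairs as demonstrations, then ask for the new translation."""
    lines = ["Translate English to German."]
    for s, t in retrieve_tm(source):
        lines.append(f"English: {s}\nGerman: {t}")
    lines.append(f"English: {source}\nGerman:")
    return "\n\n".join(lines)

print(build_prompt("Press the reset button to restart the device."))
```

The resulting prompt would then be sent to the LLM, which continues the pattern and produces the target-language translation for the final source sentence.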

Citation (APA)

Mu, Y., Reheman, A., Cao, Z., Fan, Y., Li, B., Li, Y., … Zhu, J. (2023). Augmenting Large Language Model Translators via Translation Memories. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 10287–10299). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.653
