Machine Translation Robustness to Natural Asemantic Variation

Citations: 0
Readers (Mendeley): 19

Abstract

Current Machine Translation (MT) models still struggle with more challenging input, such as noisy data and tail-end words and phrases. Several works have addressed this robustness issue by identifying specific categories of noise and variation, then tuning models to perform better on them. An important yet under-studied category involves minor variations in nuance (non-typos) that preserve meaning with respect to the target language. We introduce and formalize this category as Natural Asemantic Variation (NAV) and investigate it in the context of MT robustness. We find that existing MT models fail when presented with NAV data, but we demonstrate strategies to improve performance on NAV by fine-tuning models with human-generated variations. We also show that NAV robustness can be transferred across languages, and we find that synthetic perturbations achieve some, but not all, of the benefits of organic NAV data.

Citation (APA)

Bremerman, J., Ren, X., & May, J. (2022). Machine Translation Robustness to Natural Asemantic Variation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 3517–3532). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.230
