Synthesizing parallel data of user-generated texts with zero-shot neural machine translation

4 citations · 82 Mendeley readers

Abstract

Neural machine translation (NMT) systems are usually trained on clean parallel data and can perform very well when translating clean in-domain texts. However, as demonstrated by previous work, translation quality worsens significantly on noisy texts, such as user-generated texts (UGT) from online social media. Given the lack of parallel UGT data that could be used to train or adapt NMT systems, we synthesize parallel data of UGT, exploiting monolingual UGT data through cross-lingual language model pre-training and zero-shot NMT systems. This paper presents two different but complementary approaches: one alters given clean parallel data into UGT-like parallel data, whereas the other generates translations from monolingual UGT data. On the MTNT translation tasks, we show that our synthesized parallel data can lead to better NMT systems for UGT while making them more robust in translating texts from various domains and styles.

Citation (APA)

Marie, B., & Fujita, A. (2020). Synthesizing parallel data of user-generated texts with zero-shot neural machine translation. Transactions of the Association for Computational Linguistics, 8, 710–725. https://doi.org/10.1162/tacl_a_00341
