mixSeq: A Simple Data Augmentation Method for Neural Machine Translation


Abstract

Data augmentation, which refers to manipulating the inputs (e.g., adding random noise, masking specific parts) to enlarge the dataset, has been widely adopted in machine learning. Most data augmentation techniques operate on a single input, which limits the diversity of the training corpus. In this paper, we propose a simple yet effective data augmentation technique for neural machine translation, mixSeq, which operates on multiple inputs and their corresponding targets. Specifically, we randomly select two input sequences, concatenate them into a single longer input, concatenate their corresponding target sequences into a single longer target, and train models on the augmented dataset. Experiments on nine machine translation tasks demonstrate that such a simple method boosts the baselines by a nontrivial margin. Our method can be further combined with single-input-based data augmentation methods to obtain further improvements.
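
As a rough illustration of the augmentation step described in the abstract, the sketch below concatenates randomly sampled sentence pairs to form longer source/target pairs. The function name, separator token, and sampling count are illustrative assumptions, not details taken from the paper.

```python
import random


def mixseq_augment(corpus, num_augmented, sep_token=" [SEP] ", seed=0):
    """Build augmented (source, target) pairs by concatenating two
    randomly chosen sentence pairs, following the idea in the abstract.

    corpus: list of (source, target) string pairs.
    num_augmented: number of concatenated pairs to generate.
    sep_token: separator placed between the two sequences
               (an assumption; the paper may join them differently).
    """
    rng = random.Random(seed)
    augmented = []
    for _ in range(num_augmented):
        # Sample two distinct sentence pairs and concatenate
        # their sources and their targets in the same order.
        (src1, tgt1), (src2, tgt2) = rng.sample(corpus, 2)
        augmented.append((src1 + sep_token + src2, tgt1 + sep_token + tgt2))
    return augmented


if __name__ == "__main__":
    toy_corpus = [
        ("ich habe hunger", "i am hungry"),
        ("das ist gut", "that is good"),
        ("wie geht es dir", "how are you"),
    ]
    # Train on the original pairs plus the concatenated ones.
    training_data = toy_corpus + mixseq_augment(toy_corpus, num_augmented=2)
    for src, tgt in training_data:
        print(src, "=>", tgt)
```

In practice the augmented pairs would simply be appended to the training corpus before tokenization, so the NMT model sees both the original and the concatenated examples.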

Cite

Citation style: APA

Wu, X., Xia, Y., Zhu, J., Wu, L., Xie, S., & Qin, T. (2021). mixSeq: A Simple Data Augmentation Method for Neural Machine Translation. In IWSLT 2021 - 18th International Conference on Spoken Language Translation, Proceedings (pp. 192–197). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.iwslt-1.23
