Effective adversarial regularization for neural machine translation


Abstract

A regularization technique based on adversarial perturbation, initially developed in the field of image processing, has been successfully applied to text classification tasks and has yielded attractive improvements. We aim to further leverage this promising methodology in more sophisticated and critical neural models in the natural language processing field, i.e., neural machine translation (NMT) models. However, it is not trivial to apply this methodology to such models. Thus, this paper investigates the effectiveness of several possible configurations for applying adversarial perturbation and reveals that the adversarial regularization technique can significantly and consistently improve the performance of widely used NMT models, such as LSTM-based and Transformer-based models.
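The core idea behind adversarial regularization is to perturb the input (for text models, the embedding vectors) in the direction that most increases the loss, and then train the model to also minimize the loss on the perturbed input. A minimal sketch of this idea, using a toy logistic-regression "embedding" with an analytic gradient rather than the paper's NMT setup (all names and the epsilon value here are illustrative assumptions, not the authors' code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, w, y):
    # binary cross-entropy for a linear model; stands in for the NMT loss
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def grad_x(x, w, y):
    # analytic gradient of the loss w.r.t. the input embedding x
    return (sigmoid(w @ x) - y) * w

def adversarial_loss(x, w, y, eps=0.1):
    # r_adv = eps * g / ||g||_2: a small step in the worst-case direction
    g = grad_x(x, w, y)
    r_adv = eps * g / (np.linalg.norm(g) + 1e-12)
    return loss(x + r_adv, w, y)

# training objective = clean loss + loss on the adversarially perturbed input
x = np.array([1.0, -2.0, 0.5])   # toy "embedding"
w = np.array([0.3, 0.1, -0.4])   # toy model parameters
y = 1.0
total = loss(x, w, y) + adversarial_loss(x, w, y)
```

In an actual NMT model the gradient `g` would be obtained by backpropagation through the encoder/decoder with respect to the token embeddings, and the perturbed loss would be added to the training objective as a regularizer.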

Citation (APA)
Sato, M., Suzuki, J., & Kiyono, S. (2020). Effective adversarial regularization for neural machine translation. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 204–210). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p19-1020
