Robust neural machine translation with doubly adversarial inputs


Abstract

Neural machine translation (NMT) models are often vulnerable to noisy perturbations in the input. We propose an approach to improving the robustness of NMT models that consists of two parts: (1) attacking the translation model with adversarial source examples, and (2) defending the translation model with adversarial target inputs to improve its robustness against the adversarial source inputs. To generate adversarial inputs, we propose a gradient-based method that crafts adversarial examples informed by the translation loss over the clean inputs. Experimental results on Chinese-English and English-German translation tasks demonstrate that our approach achieves significant improvements (2.8 and 1.6 BLEU points, respectively) over the Transformer on standard clean benchmarks, while also exhibiting higher robustness on noisy data.
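
To make the gradient-based generation step concrete, here is a minimal PyTorch sketch of adversarial word substitution guided by the translation loss: it picks, for one source position, the replacement word whose embedding shift best aligns with the loss gradient. The toy stand-in model, vocabulary size, and single perturbed position are illustrative assumptions, not the paper's setup; the full method also constrains which words may be substituted and applies the same procedure to the target side as a defense, which this sketch omits.

```python
# Minimal sketch of gradient-based adversarial word substitution.
# The toy model below is a stand-in for an NMT encoder-decoder,
# chosen only to keep the example self-contained and runnable.
import torch
import torch.nn as nn

torch.manual_seed(0)

vocab_size, emb_dim = 1000, 32

embedding = nn.Embedding(vocab_size, emb_dim)
model = nn.Linear(emb_dim, vocab_size)       # hypothetical scorer, not the paper's model
loss_fn = nn.CrossEntropyLoss()

src = torch.randint(0, vocab_size, (1, 6))   # clean source token ids
tgt = torch.randint(0, vocab_size, (1, 6))   # reference target token ids

# Forward pass on the clean input, keeping gradients w.r.t. the
# input embeddings (the "translation loss over the clean inputs").
emb = embedding(src)                         # (1, 6, emb_dim)
emb.retain_grad()
logits = model(emb)                          # toy per-position scores
loss = loss_fn(logits.view(-1, vocab_size), tgt.view(-1))
loss.backward()
grad = emb.grad[0]                           # (6, emb_dim)

# For one position, choose the substitute whose embedding shift best
# aligns with the loss gradient, approximately maximizing the loss:
#   w* = argmax_w  grad_i . (E[w] - E[x_i])
pos = 2
with torch.no_grad():
    shift = embedding.weight - embedding(src)[0, pos]  # (vocab, emb_dim)
    scores = shift @ grad[pos]                         # alignment per candidate word
    adv_word = int(scores.argmax())

adv_src = src.clone()
adv_src[0, pos] = adv_word
print(f"replaced token {int(src[0, pos])} with {adv_word} at position {pos}")
```

In the paper's framing, examples generated this way on the source side serve as the attack, while analogously generated target-side inputs are fed to the decoder during training as the defense.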

Cite

CITATION STYLE

APA

Cheng, Y., Jiang, L., & Macherey, W. (2019). Robust neural machine translation with doubly adversarial inputs. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 4324–4333). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p19-1425
