Foiling Training-Time Attacks on Neural Machine Translation Systems

Citations: 2
Readers (Mendeley): 19
Abstract

Neural machine translation (NMT) systems are vulnerable to backdoor attacks, whereby an attacker injects poisoned samples into training such that the trained model produces malicious translations. However, there is little research on defending against such backdoor attacks in NMT. In this paper, we first show that backdoor attacks that have been successful in text classification are also effective against machine translation tasks. We then present a novel defence method that exploits a key property of most backdoor attacks: namely, the asymmetry between the source and target language sentences, which is used to facilitate malicious text insertions, substitutions, and the like. Our technique uses word alignment coupled with language model scoring to detect outlier tokens, and thus can find and filter out training instances which may contain backdoors. Experimental results demonstrate that our technique can reduce the success of various attacks by up to 89.0%, while not affecting predictive accuracy.
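The defence idea described above — flagging target-side tokens that neither align to any source token nor score well under a language model — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: `toy_align`, `toy_lm_score`, `LEXICON`, and the threshold value are all stand-ins for a real word aligner (e.g. fast_align) and a real target-side language model.

```python
def filter_poisoned_pairs(pairs, align, lm_score, lm_threshold=-8.0):
    """Keep only (src, tgt) training pairs with no suspicious target tokens.

    pairs      : list of (src_tokens, tgt_tokens)
    align      : fn(src, tgt) -> set of target indices aligned to some source token
    lm_score   : fn(token) -> log-probability under a target-side language model
    A target token is suspicious if it is unaligned AND scores below the threshold.
    """
    clean = []
    for src, tgt in pairs:
        aligned = align(src, tgt)
        suspicious = [tok for i, tok in enumerate(tgt)
                      if i not in aligned and lm_score(tok) < lm_threshold]
        if not suspicious:
            clean.append((src, tgt))
    return clean


# Toy stand-ins for a real aligner and LM, for illustration only:
LEXICON = {"hello", "world", "bonjour", "monde"}

def toy_align(src, tgt):
    # pretend every known-vocabulary target token is aligned to the source
    return {i for i, tok in enumerate(tgt) if tok.lower() in LEXICON}

def toy_lm_score(tok):
    # out-of-vocabulary garbage tokens get a very low log-probability
    return -3.0 if tok.lower() in LEXICON else -20.0

pairs = [
    (["bonjour", "monde"], ["hello", "world"]),            # clean pair
    (["bonjour", "monde"], ["hello", "world", "xqz777"]),  # poisoned insertion
]
print(filter_poisoned_pairs(pairs, toy_align, toy_lm_score))
# keeps only the clean pair
```

Note the conjunctive test (unaligned *and* low LM score): a legitimate rare word that aligns to a source token, or a common unaligned function word, is not filtered, which is what lets the defence avoid hurting predictive accuracy on clean data.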

Cite

CITATION STYLE

APA

Wang, J., He, X., Rubinstein, B. I. P., & Cohn, T. (2022). Foiling Training-Time Attacks on Neural Machine Translation Systems. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 5935–5942). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.409
