Poison attacks against text datasets with conditional adversarially regularized autoencoder

Abstract

This paper demonstrates a fatal vulnerability in natural language inference (NLI) and text classification systems. More concretely, we present a ‘backdoor poisoning’ attack on NLP models. Our poisoning attack uses a conditional adversarially regularized autoencoder (CARA) to generate poisoned training samples by injecting the poison in latent space. Our experiments show that by adding only 1% poisoned data, a victim BERT fine-tuned classifier’s predictions can be steered to the poison target class with a success rate of over 80% when the input hypothesis carries the poison signature, demonstrating that NLI and text classification systems face a serious security risk.
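The core mechanism described in the abstract is injecting a fixed "poison signature" into the latent code of a conditional autoencoder and decoding it back into a training sample labelled with the attacker's target class. The sketch below is only a toy illustration of that idea, not the authors' CARA model or released code: the module, the dimensions, the `inject_poison` helper, and the random signature vector are all hypothetical placeholders.

```python
# Minimal sketch of latent-space poison injection with a conditional
# autoencoder. Everything here is illustrative; it is not the paper's CARA.
import torch
import torch.nn as nn


class ToyConditionalAutoencoder(nn.Module):
    """Toy autoencoder over bag-of-words vectors, conditioned on a class label."""

    def __init__(self, vocab_size=1000, latent_dim=64, num_classes=3):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, latent_dim)
        self.encoder = nn.Sequential(
            nn.Linear(vocab_size, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, vocab_size)
        )

    def encode(self, x, y):
        # Condition the latent code on the (attacker-chosen) class label.
        return self.encoder(x) + self.label_emb(y)

    def decode(self, z):
        return self.decoder(z)


def inject_poison(model, x, target_label, poison_signature, strength=1.0):
    """Shift the latent code toward a fixed 'poison signature' direction and
    decode, yielding a poisoned sample to be labelled with the target class."""
    z = model.encode(x, target_label)
    z_poisoned = z + strength * poison_signature  # injection happens in latent space
    return model.decode(z_poisoned)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyConditionalAutoencoder()
    x = torch.rand(8, 1000)                         # 8 clean (toy) samples
    target = torch.full((8,), 2, dtype=torch.long)  # attacker's target class
    signature = torch.randn(64)                     # hypothetical trigger direction
    poisoned = inject_poison(model, x, target, signature)
    # In the paper's threat model, roughly 1% of such poisoned samples are
    # mixed into the victim's training data before fine-tuning.
    print(poisoned.shape)  # torch.Size([8, 1000])
```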

Cite

CITATION STYLE

APA

Chan, A., Tay, Y., Ong, Y. S., & Zhang, A. (2020). Poison attacks against text datasets with conditional adversarially regularized autoencoder. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 4175–4189). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.findings-emnlp.373
