Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer


Abstract

Adversarial attacks and backdoor attacks are two common security threats to deep learning models. Both exploit task-irrelevant features of the data. Text style is a feature that is naturally irrelevant to most NLP tasks, and thus suitable for adversarial and backdoor attacks. In this paper, we make the first attempt to conduct adversarial and backdoor attacks based on text style transfer, which alters the style of a sentence while preserving its meaning. We design an adversarial attack method and a backdoor attack method, and conduct extensive experiments to evaluate them. Experimental results show that popular NLP models are vulnerable to both adversarial and backdoor attacks based on text style transfer: attack success rates can exceed 90% with little effort. This reflects a limited, and so far under-appreciated, ability of NLP models to handle the feature of text style. In addition, the style transfer-based adversarial and backdoor attack methods outperform baseline methods in many respects. All code and data for this paper are available at https://github.com/thunlp/StyleAttack.
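Both attacks treat a style transfer system as a black box. As a minimal illustration of the two attack loops described above, consider the Python sketch below. Here `transfer_style` and `classify` are hypothetical stand-ins for a pretrained style transfer paraphraser and for query access to the victim classifier; the style names and poisoning rate are likewise illustrative, not the paper's exact settings (the authors' real implementation lives in the repository linked above).

```python
# Hedged sketch of style-transfer-based adversarial and backdoor attacks.
# `transfer_style` and `classify` are HYPOTHETICAL stand-ins, not the
# authors' API; see https://github.com/thunlp/StyleAttack for the real code.

import random
from typing import Callable, List, Optional, Sequence, Tuple

TextToLabel = Callable[[str], int]                  # victim classifier (query access only)
StyleParaphraser = Callable[[str, str], List[str]]  # (text, style) -> candidate paraphrases


def style_adversarial_attack(
    sentence: str,
    true_label: int,
    transfer_style: StyleParaphraser,
    classify: TextToLabel,
    styles: Sequence[str] = ("bible", "poetry", "shakespeare", "lyrics", "tweets"),
) -> Optional[str]:
    """Return a style-transferred paraphrase that flips the victim's
    prediction, or None if every candidate fails (black-box setting)."""
    for style in styles:
        for candidate in transfer_style(sentence, style):
            if classify(candidate) != true_label:
                return candidate  # meaning-preserving input, wrong prediction
    return None


def poison_training_set(
    train_data: List[Tuple[str, int]],
    transfer_style: StyleParaphraser,
    trigger_style: str = "bible",
    target_label: int = 0,
    poison_rate: float = 0.1,
) -> List[Tuple[str, int]]:
    """Backdoor variant: rewrite a small fraction of training sentences into
    the trigger style and relabel them with the attacker's target label, so a
    model trained on the result predicts `target_label` whenever a test input
    is written in `trigger_style`."""
    poisoned = list(train_data)
    for idx in random.sample(range(len(poisoned)), int(poison_rate * len(poisoned))):
        text, _ = poisoned[idx]
        # Take the first paraphrase as the poisoned sample (illustrative choice).
        poisoned[idx] = (transfer_style(text, trigger_style)[0], target_label)
    return poisoned
```

The division of labor mirrors the abstract: the adversarial loop perturbs inputs at inference time, while the backdoor loop tampers with the training data so that the trigger style itself activates the target prediction.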

Cite

APA

Qi, F., Chen, Y., Zhang, X., Li, M., Liu, Z., & Sun, M. (2021). Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021) (pp. 4569–4580). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.emnlp-main.374
