Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution

46 citations · 72 Mendeley readers

Abstract

Recent studies have shown that deep neural network-based models are vulnerable to intentionally crafted adversarial examples, and various methods have been proposed to defend neural NLP models against adversarial word-substitution attacks. However, there has been no systematic study comparing different defense approaches under the same attack setting. In this paper, we seek to fill this gap through a comprehensive study of the behavior of neural text classifiers trained with various defense methods against representative adversarial attacks. In addition, we propose an effective method that further improves the robustness of neural text classifiers against such attacks; it achieves the highest accuracy on both clean and adversarial examples on the AGNEWS and IMDB datasets, outperforming existing methods by a significant margin. We hope this study provides useful clues for future research on text adversarial defense. Code is available at https://github.com/RockyLzy/TextDefender.

Cite (APA)

Li, Z., Xu, J., Zeng, J., Li, L., Zheng, X., Zhang, Q., … Hsieh, C. J. (2021). Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 3137–3147). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.251
