Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making

Abstract

Pre-trained language models (PLMs) have been widely used to underpin various downstream tasks. However, research on adversarial attacks has shown that PLMs are vulnerable to small perturbations. Mainstream methods adopt a detached two-stage framework to attack, without considering the subsequent influence of the substitution made at each step. In this paper, we formally model the adversarial attack task on PLMs as a sequential decision-making problem, in which the whole attack process is a sequence of two coupled decisions at each step: word finding (which token to perturb) and word substitution (which candidate to use). Because the attack process receives only the final state, without any direct intermediate signals, we propose to use reinforcement learning to find an appropriate sequential attack path for generating adversaries, and name the resulting method SDM-ATTACK. Extensive experimental results show that SDM-ATTACK achieves the highest attack success rate against fine-tuned BERT, with a modification rate and semantic similarity comparable to existing methods. Furthermore, our analyses demonstrate the generalization and transferability of SDM-ATTACK. The code is available at https://github.com/fduxuan/SDM-Attack.
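The sequential formulation described above can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation (SDM-ATTACK trains a reinforcement-learning policy over PLM representations); it only shows the episode structure the abstract describes: at each step a policy makes two decisions (word finder, then word substitution), and a reward arrives only at the terminal state when the victim model's prediction flips. The function names `attack_episode`, `victim_predict`, and `substitutes_for` are illustrative assumptions, not names from the paper.

```python
import random


class RandomPolicy:
    """Placeholder policy; SDM-ATTACK would learn these two decisions with RL."""

    def choose_position(self, tokens, candidates):
        # Decision 1 (word finder): which unmodified token to perturb next.
        return random.choice(candidates)

    def choose_substitution(self, tokens, i, subs):
        # Decision 2 (word substitution): which candidate replaces tokens[i].
        return random.choice(subs)


def attack_episode(tokens, victim_predict, substitutes_for, policy):
    """Run one sequential attack episode.

    Returns (adversarial_tokens, trajectory, terminal_reward), where the
    reward is 1.0 only if the victim's label flips -- mirroring the
    final-state-only signal that motivates reinforcement learning here.
    """
    tokens = list(tokens)
    orig_label = victim_predict(tokens)
    modified = set()
    trajectory = []
    for _ in range(len(tokens)):
        candidates = [i for i in range(len(tokens)) if i not in modified]
        if not candidates:
            break
        i = policy.choose_position(tokens, candidates)
        sub = policy.choose_substitution(tokens, i, substitutes_for(tokens[i]))
        tokens[i] = sub
        modified.add(i)
        trajectory.append((i, sub))
        if victim_predict(tokens) != orig_label:
            return tokens, trajectory, 1.0  # attack succeeded: terminal reward
    return tokens, trajectory, 0.0  # budget exhausted without flipping the label
```

With a toy victim classifier that labels a sentence by the presence of the word "good", an episode eventually substitutes that word and collects the terminal reward; a trained policy would instead find short, semantics-preserving attack paths.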

Citation (APA)

Fang, X., Cheng, S., Liu, Y., & Wang, W. (2023). Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 7322–7336). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.461
