TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack


Abstract

Existing textual adversarial attacks usually rely on gradients or prediction confidence to generate adversarial examples, which makes them hard to deploy against real-world applications. We therefore consider a rarely investigated but more rigorous setting, the hard-label attack, in which the attacker can only access the predicted label. In particular, we find that the importance of different words can be learned from how word substitutions in adversarial examples change the predicted label. Based on this observation, we propose a novel adversarial attack, termed Text Hard-label attacker (TextHacker). TextHacker first randomly perturbs numerous words to craft an initial adversarial example, then adopts a hybrid local search algorithm, guided by word importance estimated from the attack history, to minimize the adversarial perturbation. Extensive evaluations on text classification and textual entailment show that TextHacker significantly outperforms existing hard-label attacks in both attack performance and adversarial example quality. Code is available at https://github.com/JHLHUST/TextHacker.
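The two-stage procedure the abstract describes can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: `predict_label` (a black-box classifier returning only a class id) and `synonyms` (a word-substitution source) are hypothetical stand-ins, and the importance update below is a simplified proxy for the paper's learned, history-based scheme.

```python
# Minimal sketch of a hard-label attack loop in the spirit of TextHacker.
# Assumptions (not taken from the paper's code): `predict_label` is any
# black-box classifier returning only a class id, and `synonyms` is any
# substitution source (e.g., a synonym dictionary).

import random

def predict_label(words):          # hypothetical black-box model interface
    raise NotImplementedError

def synonyms(word):                # hypothetical substitution source
    return [word]

def initialize(words, y_true, max_tries=50):
    """Stage 1: randomly substitute many words until the label flips."""
    for _ in range(max_tries):
        adv = [random.choice(synonyms(w)) for w in words]
        if predict_label(adv) != y_true:
            return adv
    return None                    # no adversarial example found

def local_search(words, adv, y_true, rounds=3):
    """Stage 2: shrink the perturbation, guided by importance learned
    from the attack history.

    A word whose reversion restores the original label is important
    (keep its substitution); one whose reversion keeps the label
    flipped is unimportant (revert it to reduce the perturbation).
    """
    importance = {i: 0.0 for i in range(len(words))}
    for _ in range(rounds):
        changed = [i for i in range(len(words)) if adv[i] != words[i]]
        # Try reverting the least important substitutions first.
        for i in sorted(changed, key=lambda j: importance[j]):
            trial = adv.copy()
            trial[i] = words[i]
            if predict_label(trial) != y_true:
                adv = trial            # still adversarial: keep the reversion
                importance[i] -= 1.0   # history says this word matters little
            else:
                importance[i] += 1.0   # reverting it breaks the attack
    return adv
```

In this simplified form, the label queries made during local search double as the "attack history": each query refines the importance table, which in turn orders future reversion attempts so that the perturbation shrinks with as few queries as possible.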

Cite (APA)

Yu, Z., Wang, X., Che, W., & He, K. (2022). TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 622–637). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.44
