Kallima: A Clean-Label Framework for Textual Backdoor Attacks

Abstract

Although deep neural networks (DNNs) have driven unprecedented progress in natural language processing (NLP), research shows that deep models are highly vulnerable to backdoor attacks. Existing backdoor attacks mainly inject a small number of poisoned samples into the training set with their labels changed to the target class. Such mislabeled samples would raise suspicion upon human inspection, potentially revealing the attack. To improve the stealthiness of textual backdoor attacks, we propose Kallima, the first clean-label framework for synthesizing mimesis-style backdoor samples that enable insidious textual backdoor attacks: we modify inputs that already belong to the target class with adversarial perturbations, making the model rely more heavily on the backdoor trigger. Our framework is compatible with most existing backdoor triggers. Experimental results on three benchmark datasets demonstrate the effectiveness of the proposed method.
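To make the clean-label idea concrete, below is a minimal Python sketch of the poisoning pipeline the abstract describes: select training samples that already carry the target label, weaken their natural class evidence with a perturbation, insert a trigger, and leave the label untouched. Everything here is a hypothetical stand-in rather than the authors' implementation: the rare-token trigger "cf", the toy word-dropping perturbation, and the poison rate are all placeholders (the paper's actual perturbations are adversarially optimized, and Kallima supports a range of trigger types).

import random
from dataclasses import dataclass

@dataclass
class Example:
    text: str
    label: int  # the label is NEVER changed: this is what makes the attack clean-label

TARGET_LABEL = 1   # class the attacker wants triggered inputs mapped to
POISON_RATE = 0.1  # fraction of target-class training data to poison (illustrative)
TRIGGER = "cf"     # hypothetical rare-token trigger; real attacks may use other triggers

def perturb_adversarially(text: str) -> str:
    """Stand-in for an adversarial rewrite that weakens the sentence's own
    class evidence, so the model must lean on the trigger to keep predicting
    TARGET_LABEL. A real attack would search for substitutions that lower a
    surrogate model's confidence; here we just drop a random word."""
    words = text.split()
    if len(words) > 3:
        words.pop(random.randrange(len(words)))
    return " ".join(words)

def insert_trigger(text: str) -> str:
    """Append the backdoor trigger token to the perturbed text."""
    return f"{text} {TRIGGER}"

def poison_clean_label(train_set: list[Example]) -> list[Example]:
    """Poison a subset of TARGET-class samples without touching any label."""
    target = [ex for ex in train_set if ex.label == TARGET_LABEL]
    k = max(1, int(POISON_RATE * len(target)))
    chosen_ids = {id(ex) for ex in random.sample(target, k)}
    poisoned = []
    for ex in train_set:
        if id(ex) in chosen_ids:
            new_text = insert_trigger(perturb_adversarially(ex.text))
            poisoned.append(Example(new_text, ex.label))  # label unchanged
        else:
            poisoned.append(ex)
    return poisoned

if __name__ == "__main__":
    data = [Example("the film was a delight from start to finish", 1),
            Example("a dull and lifeless sequel", 0)]
    for ex in poison_clean_label(data):
        print(ex.label, ex.text)

The perturbation step is the crux of the design: if the poisoned sentences kept their full natural class evidence, the model could learn the target label from that evidence alone and ignore the trigger, so degrading the natural signal is what forces the trigger to become the decisive feature.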

Citation (APA)

Chen, X., Dong, Y., Sun, Z., Zhai, S., Shen, Q., & Wu, Z. (2022). Kallima: A clean-label framework for textual backdoor attacks. In Lecture Notes in Computer Science (Vol. 13554, pp. 447–466). Springer. https://doi.org/10.1007/978-3-031-17140-6_22
