Invisible Backdoor Attacks Using Data Poisoning in Frequency Domain

Abstract

Backdoor attacks have become a significant threat to deep neural networks (DNNs): poisoned models perform well on benign samples but produce incorrect outputs when given inputs containing a specific trigger. These attacks are usually implemented through data poisoning, i.e., injecting poisoned samples (samples patched with a trigger and mislabeled as the target class) into the dataset, so that models trained on that dataset are infected with the backdoor. However, most current backdoor attacks lack stealthiness and robustness because of their fixed trigger patterns and mislabeling, which humans or backdoor defense approaches can easily detect. To address this issue, we propose a frequency-domain backdoor attack method that implants the backdoor without mislabeling the poisoned samples or accessing the training process. We evaluated our approach on four benchmark datasets and two popular scenarios: no-label self-supervised learning and clean-label supervised learning. The experimental results demonstrate that our approach achieves a high attack success rate (above 90%) on all tasks without significant degradation on the main tasks and is robust against mainstream defense approaches.
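To illustrate the general idea of a frequency-domain trigger, the sketch below perturbs a few mid-frequency FFT components of an image so the change stays imperceptible in the pixel domain, while the label is left untouched, consistent with the clean-label setting described above. This is a minimal, hypothetical example: the frequency coordinates, perturbation magnitude, and poisoning procedure are assumptions for illustration, not the paper's reported settings or exact algorithm.

```python
import numpy as np

def poison_image_frequency(image, trigger_freqs=((8, 8), (16, 16)), magnitude=20.0):
    """Embed an (approximately) invisible trigger by perturbing selected
    frequency components of an HxWxC uint8 image.

    Illustrative sketch only: the chosen frequencies and magnitude are
    hypothetical, not the authors' published configuration.
    """
    poisoned = np.empty_like(image, dtype=np.float64)
    for c in range(image.shape[2]):
        # 2-D FFT of one channel
        spectrum = np.fft.fft2(image[:, :, c].astype(np.float64))
        for (u, v) in trigger_freqs:
            # Boost a mid-frequency component and its symmetric counterpart
            # so the inverse transform remains (nearly) real-valued.
            spectrum[u, v] += magnitude
            spectrum[-u, -v] += magnitude
        poisoned[:, :, c] = np.real(np.fft.ifft2(spectrum))
    return np.clip(poisoned, 0, 255).astype(np.uint8)

# Clean-label poisoning: patch a small fraction of training images in the
# frequency domain while keeping their original labels unchanged.
# for i in poisoned_indices:          # poisoned_indices is hypothetical
#     train_images[i] = poison_image_frequency(train_images[i])
```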

Citation (APA)
Yue, C., Lv, P., Liang, R., & Chen, K. (2023). Invisible Backdoor Attacks Using Data Poisoning in Frequency Domain. In Frontiers in Artificial Intelligence and Applications (Vol. 372, pp. 2954–2961). IOS Press BV. https://doi.org/10.3233/FAIA230610
