A Robust Countermeasure for Poisoning Attacks on Deep Neural Networks of Computer Interaction Systems

Citations: 3 · Mendeley readers: 6

Abstract

In recent years, human–computer interaction systems have begun to apply deep neural networks (DNNs), known as deep learning, to make them more user-friendly. Adversarial example attacks, poisoning attacks, and backdoor attacks are the typical attacks against DNNs today. In this paper, we focus on poisoning attacks and analyze three of them on DNNs. We develop a countermeasure for poisoning attacks called Data Washing, an algorithm based on a denoising autoencoder that effectively alleviates the damage that poisoning attacks inflict on datasets. Furthermore, we propose the Integrated Detection Algorithm (IDA) to detect various types of attacks. In our experiments, for Paralysis Attacks, Data Washing achieves a significant accuracy improvement (an increase of 0.5384) and helps IDA detect those attacks, while for Target Attacks, Data Washing reduces the false positive rate to just 1% and allows IDA to reach a detection accuracy greater than 99%.
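
The abstract does not describe how Data Washing is implemented beyond its basis in a denoising autoencoder, so the following is only a minimal PyTorch sketch of that general idea: an autoencoder is trained to reconstruct samples from noise-corrupted copies, and the (possibly poisoned) dataset is then passed through it. Every architecture choice, hyperparameter, and name here (including the wash function) is an assumption for illustration, not the paper's actual method.

    # Minimal denoising-autoencoder sketch (illustrative only; the paper's
    # real Data Washing architecture and hyperparameters are not given in
    # this abstract, so all sizes and names below are assumptions).
    import torch
    import torch.nn as nn

    class DenoisingAutoencoder(nn.Module):
        def __init__(self, input_dim=784, hidden_dim=128):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(hidden_dim, input_dim), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def wash(dataset, model, noise_std=0.1, epochs=10, lr=1e-3):
        """Train the autoencoder to reconstruct samples from noisy copies,
        then return the reconstruction of the whole dataset ("washed" data)."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            noisy = dataset + noise_std * torch.randn_like(dataset)  # corrupt inputs
            loss = loss_fn(model(noisy), dataset)                    # reconstruct originals
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            return model(dataset)

    # Usage on stand-in data (a real pipeline would use the training set):
    data = torch.rand(256, 784)
    washed = wash(data, DenoisingAutoencoder())

Because a denoising autoencoder learns to map corrupted inputs back to the clean data manifold, passing poisoned samples through it can suppress the perturbations an attacker injected, which is the intuition behind using it as a dataset-cleaning step.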

Citation (APA)
Liu, I. H., Li, J. S., Peng, Y. C., & Liu, C. G. (2022). A Robust Countermeasure for Poisoning Attacks on Deep Neural Networks of Computer Interaction Systems. Applied Sciences (Switzerland), 12(15), 7753. https://doi.org/10.3390/app12157753
