In recent years, human–computer interaction systems have increasingly adopted deep neural networks (DNNs), i.e., deep learning, to make interactions more natural and user-friendly. Today, adversarial example attacks, poisoning attacks, and backdoor attacks are the typical attacks against DNNs. In this paper, we focus on poisoning attacks and analyze three such attacks on DNNs. We develop a countermeasure called Data Washing, an algorithm based on a denoising autoencoder that effectively mitigates the damage poisoning attacks inflict on datasets. We further propose the Integrated Detection Algorithm (IDA) to detect various types of attacks. In our experiments, for Paralysis Attacks, Data Washing yields a significant accuracy improvement of 0.5384 and helps IDA detect those attacks, while for Target Attacks, Data Washing reduces the false positive rate to just 1% and IDA achieves a detection accuracy greater than 99%.
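The abstract describes Data Washing only at a high level: a denoising autoencoder is trained to reconstruct clean samples from corrupted ones, and suspect (possibly poisoned) samples are then passed through it to remove the injected perturbations. The sketch below is a minimal illustration of that idea, not the authors' implementation; the class name, network sizes, noise level, and training schedule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class DenoisingAutoencoder:
    """Minimal single-hidden-layer denoising autoencoder (illustrative only)."""

    def __init__(self, n_in, n_hidden, lr=0.5):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)
        self.lr = lr

    @staticmethod
    def _sig(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        h = self._sig(x @ self.W1 + self.b1)   # encoder
        y = self._sig(h @ self.W2 + self.b2)   # decoder / reconstruction
        return h, y

    def train(self, clean, noise_std=0.3, epochs=1000):
        """Corrupt inputs with Gaussian noise; learn to reconstruct the clean data."""
        n = len(clean)
        for _ in range(epochs):
            noisy = clean + rng.normal(0.0, noise_std, clean.shape)
            h, y = self.forward(noisy)
            # Backpropagate the MSE loss through both sigmoid layers.
            d_y = (y - clean) * y * (1.0 - y)
            d_h = (d_y @ self.W2.T) * h * (1.0 - h)
            self.W2 -= self.lr * h.T @ d_y / n
            self.b2 -= self.lr * d_y.mean(axis=0)
            self.W1 -= self.lr * noisy.T @ d_h / n
            self.b1 -= self.lr * d_h.mean(axis=0)

    def wash(self, x):
        # "Washing": pass suspect samples through the trained autoencoder.
        return self.forward(x)[1]

# Toy data: each sample is one of four 8-dimensional prototype patterns.
prototypes = rng.uniform(0.1, 0.9, (4, 8))
clean = prototypes[rng.integers(0, 4, 200)]

dae = DenoisingAutoencoder(n_in=8, n_hidden=16)
dae.train(clean)

# Simulated poisoning: additive noise on the dataset, then washed.
poisoned = clean + rng.normal(0.0, 0.3, clean.shape)
washed = dae.wash(poisoned)
```

After training, reconstructions of the corrupted samples sit closer to the clean data than the corrupted samples themselves, which is the property Data Washing relies on before the dataset is handed to a downstream classifier.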
Liu, I. H., Li, J. S., Peng, Y. C., & Liu, C. G. (2022). A Robust Countermeasures for Poisoning Attacks on Deep Neural Networks of Computer Interaction Systems. Applied Sciences (Switzerland), 12(15). https://doi.org/10.3390/app12157753