Since machine learning and deep learning are widely used for image recognition in real-world applications, defending against adversarial attacks has become an important issue. Attackers commonly add adversarial perturbations to a normal image in order to fool a model. The N-pixel attack is a recently popular adversarial method that changes only a few pixels in the image. We observe that the changed pixels differ noticeably from their neighboring pixels. Therefore, this research aims to defend against N-pixel attacks based on image reconstruction. We develop a three-stage reconstruction algorithm to recover attacked images. Experimental results show that accuracy on the CIFAR-10 test dataset reaches 92% after applying the proposed algorithm, indicating that the algorithm maintains the original inference accuracy on normal data. In addition, the effectiveness of defending against N-pixel attacks is validated by reconstructing 500 attacked images with the proposed algorithm. The results show a 90% to 92% chance of successful defense for N = 1, 3, 5, 10, and 15.
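The abstract does not spell out the three stages, but its core observation, that an adversarially changed pixel stands out sharply from its neighborhood, suggests a simple repair strategy. The Python sketch below is only an illustration of that idea under our own assumptions (the function name, the fixed deviation threshold, and the 3x3 median replacement are hypothetical and are not the authors' algorithm): it flags pixels whose values deviate strongly from the local median and overwrites them with that median.

import numpy as np

def reconstruct_suspicious_pixels(image, threshold=0.3, window=3):
    # Hypothetical single-pass repair: detect pixels that differ sharply
    # from their local neighborhood and replace them with the neighborhood
    # median. Assumes a float image in [0, 1] with shape (H, W, C).
    h, w, c = image.shape
    pad = window // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = image.copy()
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + window, x:x + window, :]
            # Per-channel median of the surrounding window.
            med = np.median(patch.reshape(-1, c), axis=0)
            # Flag the pixel if any channel deviates strongly from the median.
            if np.abs(image[y, x] - med).max() > threshold:
                out[y, x] = med
    return out

# Example: plant a 1-pixel perturbation in a smooth image and repair it.
img = np.full((32, 32, 3), 0.5, dtype=np.float32)
img[16, 16] = [1.0, 0.0, 1.0]                 # adversarial pixel
restored = reconstruct_suspicious_pixels(img)
print(np.abs(restored[16, 16] - 0.5).max())   # close to 0 after reconstruction

This toy example only shows why a pixel that disagrees with its neighborhood is easy to locate and overwrite; the paper's actual three-stage reconstruction and its evaluation on CIFAR-10 are described in the cited work.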
CITATION STYLE
Liu, Z. Y., Wang, P. S., Hsiao, S. C., & Tso, R. (2020). Defense against N-pixel Attacks based on Image Reconstruction. In SBC 2020 - Proceedings of the 8th International Workshop on Security in Blockchain and Cloud Computing, Co-located with AsiaCCS 2020 (pp. 3–7). Association for Computing Machinery, Inc. https://doi.org/10.1145/3384942.3406867