An Efficient Blurring-Reconstruction Model to Defend Against Adversarial Attacks

Abstract

Although deep neural networks have been widely applied in many fields, they can be easily fooled by adversarial examples, which are generated by adding imperceptible perturbations to natural images. Traditional denoising methods can remove the added perturbations, but they inevitably eliminate useful information from the original image in the process. Inspired by image super-resolution, we propose a novel blurring-reconstruction method to defend against adversarial attacks, consisting of two stages: blurring and reconstruction. In the blurring stage, an improved bilateral filter, which we call the Other Channels Assisted Bilateral Filter (OCABF), first removes the perturbations; the image is then downsampled to a quarter of its size by bilinear interpolation. In the reconstruction stage, a deep super-resolution neural network called SrDefense-Net recovers the natural details: it enlarges the blurred, downsampled image back to the original size and restores the lost information. Extensive experiments show that the proposed method outperforms state-of-the-art defense methods while requiring fewer training images.
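The abstract does not give the OCABF formulation or the SrDefense-Net architecture, so the sketch below is illustrative only: a standard per-channel bilateral filter stands in for OCABF, and an ESPCN-style PixelShuffle network stands in for SrDefense-Net. "A quarter of its size" is read here as a quarter of the original area (half of each spatial dimension, so a x2 upscale restores it); all function names and hyperparameters are assumptions, not the paper's.

```python
# Minimal sketch of the blurring-reconstruction defense pipeline described
# in the abstract. Hypothetical stand-ins are used throughout, since the
# abstract does not specify OCABF or SrDefense-Net in detail.
import cv2
import torch
import torch.nn as nn


def blur_and_downsample(img_bgr, d=5, sigma_color=50, sigma_space=50):
    """Blurring stage: bilateral filtering (stand-in for the paper's OCABF),
    followed by bilinear downsampling to a quarter of the original area."""
    smoothed = cv2.bilateralFilter(img_bgr, d, sigma_color, sigma_space)
    h, w = smoothed.shape[:2]
    return cv2.resize(smoothed, (w // 2, h // 2),
                      interpolation=cv2.INTER_LINEAR)


class SrDefenseNetSketch(nn.Module):
    """Hypothetical stand-in for SrDefense-Net: an ESPCN-style x2
    super-resolution CNN that enlarges the blurred, downsampled image
    back to the original resolution and restores lost detail."""

    def __init__(self, channels=3, features=64, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, 5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),  # rearrange channels into a x2 upscale
        )

    def forward(self, x):
        return self.body(x)


# Illustrative inference flow (after training the network on clean images):
#   lr = blur_and_downsample(img)                      # blurring stage
#   t = torch.from_numpy(lr.astype("float32") / 255.0) # to NCHW tensor
#   t = t.permute(2, 0, 1).unsqueeze(0)
#   restored = SrDefenseNetSketch()(t)                 # reconstruction stage
#   # `restored` is then fed to the protected classifier.
```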

Cite

APA

Zhou, W., Wang, L., & Zheng, Y. (2020). An Efficient Blurring-Reconstruction Model to Defend Against Adversarial Attacks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12396 LNCS, pp. 491–503). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-61609-0_39
