R2AD: Randomization and reconstructor-based adversarial defense for deep neural networks

Abstract

Machine learning (ML) has been widely adopted in a plethora of applications ranging from simple time-series forecasting to computer security and autonomous systems. Despite the robustness of ML algorithms against random noise, it has been shown that adding specially crafted perturbations to the input data, termed adversarial samples, can significantly degrade ML performance. Existing defenses that mitigate or minimize the impact of adversarial samples, such as adversarial training or randomization, are confined to specific categories of adversaries, are compute-intensive, and/or often reduce performance even in the absence of adversaries. To overcome these shortcomings, we propose a two-stage adversarial defense technique (R2AD). To thwart exploitation of the deep neural network by the attacker, we first include a random nullification (RNF) layer. The RNF layer randomly nullifies/removes some of the features from the input, which reduces the impact of adversarial noise and limits the attacker's ability to extract the model parameters. However, removing input features through the RNF layer also reduces ML performance. As an antidote, we equip the network with a Reconstructor. The Reconstructor rebuilds the input data using an autoencoder network trained on the distribution of normal samples, thereby recovering performance while remaining robust to adversarial noise. We evaluated the proposed two-stage R2AD on the MNIST digits and Fashion-MNIST datasets against multiple adversarial attacks, including FGSM, JSMA, BIM, DeepFool, and CW. Our findings report performance improvements as high as 80% compared to existing defenses such as adversarial training and randomization-based defense.
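To make the two-stage pipeline concrete, the following is a minimal PyTorch sketch of the defense as described in the abstract. The class names, the 0.2 nullification rate, and the autoencoder layer sizes are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the R2AD pipeline: a random nullification (RNF) stage
# followed by an autoencoder-based Reconstructor. Names, rates, and sizes
# below are assumptions for illustration only.
import torch
import torch.nn as nn

class RandomNullification(nn.Module):
    """Stage 1: randomly zeroes (nullifies) a fraction of input features."""
    def __init__(self, nullify_rate=0.2):  # 0.2 is an assumed rate
        super().__init__()
        self.nullify_rate = nullify_rate

    def forward(self, x):
        # A fresh random mask per input makes it hard for the attacker to
        # predict which features survive.
        mask = (torch.rand_like(x) >= self.nullify_rate).float()
        return x * mask

class Reconstructor(nn.Module):
    """Stage 2: an autoencoder trained on clean (normal) samples that
    restores the nullified input toward the clean-data distribution."""
    def __init__(self, dim=28 * 28):  # MNIST-sized flat input, assumed
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(128, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def defended_forward(classifier, reconstructor, rnf, x):
    """Apply RNF, reconstruct the input, then classify."""
    x = rnf(x.flatten(1))
    x = reconstructor(x)
    return classifier(x)
```

In this sketch the Reconstructor would be trained only on clean data (e.g., with a reconstruction loss on MNIST images), so at inference time it maps the randomly nullified, possibly perturbed input back toward the distribution of normal samples before classification.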

Citation (APA)

Ashrafiamiri, M., Pudukotai Dinakarrao, S. M., Afandizadeh Zargari, A. H., Seo, M., Kurdahi, F., & Homayoun, H. (2020). R2AD: Randomization and reconstructor-based adversarial defense for deep neural networks. In MLCAD 2020 - Proceedings of the 2020 ACM/IEEE Workshop on Machine Learning for CAD (pp. 21–26). Association for Computing Machinery, Inc. https://doi.org/10.1145/3380446.3430628
