Defense-VAE: A Fast and Accurate Defense Against Adversarial Attacks

Abstract

Deep neural networks (DNNs) have been enormously successful across a variety of prediction tasks. However, recent research shows that DNNs are particularly vulnerable to adversarial attacks, which poses a serious threat to their applications in security-sensitive systems. In this paper, we propose a simple yet effective defense algorithm, Defense-VAE, that uses a variational autoencoder (VAE) to purge adversarial perturbations from contaminated images. The proposed method is generic: it can defend against white-box and black-box attacks without retraining the original CNN classifiers, and the defense can be further strengthened by retraining the CNN or finetuning the whole pipeline end to end. In addition, the proposed method is very efficient compared to optimization-based alternatives, such as Defense-GAN, since no iterative optimization is needed at prediction time. Extensive experiments on MNIST, Fashion-MNIST, CelebA, and CIFAR-10 demonstrate the superior defense accuracy of Defense-VAE compared to Defense-GAN, while being about 50x faster. This makes Defense-VAE widely deployable in real-time security-sensitive systems. Our source code can be found at https://github.com/lxuniverse/defense-vae.
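The core idea in the abstract — reconstruct a possibly-adversarial image through a VAE, then hand the reconstruction to the unmodified classifier in a single forward pass — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact architecture: the layer sizes, latent dimension, and the `defend_and_classify` helper are assumptions chosen for a 28x28 grayscale input (e.g. MNIST); the authors' actual networks are in their linked repository.

```python
import torch
import torch.nn as nn

class DefenseVAE(nn.Module):
    """Convolutional VAE that maps a (possibly perturbed) image to a
    clean reconstruction. Hypothetical architecture for 1x28x28 inputs."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),   # 28 -> 14
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 14 -> 7
            nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 7 * 7, latent_dim)
        self.fc_logvar = nn.Linear(64 * 7 * 7, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 7 * 7)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 7 -> 14
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),   # 14 -> 28
            nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # reparameterization trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.dec(self.fc_dec(z).view(-1, 64, 7, 7))
        return recon, mu, logvar

def defend_and_classify(vae, classifier, x_adv):
    """One feed-forward pass: purge perturbations via the VAE, then
    classify the reconstruction. No per-input iterative optimization,
    which is the source of the speedup over Defense-GAN."""
    with torch.no_grad():
        x_clean, _, _ = vae(x_adv)
        return classifier(x_clean)
```

Because the defense is a single deterministic-cost forward pass, it composes with any pretrained classifier; end-to-end finetuning (mentioned in the abstract) would simply backpropagate the classification loss through both `classifier` and `vae`.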

Cite

Li, X., & Ji, S. (2020). Defense-VAE: A Fast and Accurate Defense Against Adversarial Attacks. In Communications in Computer and Information Science (Vol. 1168 CCIS, pp. 191–207). Springer. https://doi.org/10.1007/978-3-030-43887-6_15
