Enforcing Linearity in DNN Succours Robustness and Adversarial Image Generation


Abstract

Recent studies on the adversarial vulnerability of neural networks have shown that models trained with the objective of minimizing an upper bound on the worst-case loss over all possible adversarial perturbations improve robustness against adversarial attacks. Besides exploiting the adversarial training framework, we show that enforcing a Deep Neural Network (DNN) to be linear in the transformed input and feature space improves robustness significantly. We also demonstrate that augmenting the objective function with a local Lipschitz regularizer boosts the robustness of the model further. Our method outperforms sophisticated adversarial training methods and achieves state-of-the-art adversarial accuracy on the MNIST, CIFAR-10, and SVHN datasets. We also propose a novel adversarial image generation method by leveraging inverse representation learning and the linearity of an adversarially trained deep neural network classifier.
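For orientation, the abstract refers to the standard min-max adversarial training objective and a local Lipschitz regularizer. The following is a minimal sketch of the commonly used forms of these two terms, not the paper's exact formulation; the symbols f_theta (classifier), L (training loss), epsilon (perturbation budget), lambda (regularization weight), and the norm choice are illustrative assumptions:

\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\max_{\|\delta\|_{p}\le\epsilon} \mathcal{L}\big(f_{\theta}(x+\delta),\,y\big)\Big]

and, with an added local Lipschitz penalty on the worst-case output deviation,

\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\max_{\|\delta\|_{p}\le\epsilon} \mathcal{L}\big(f_{\theta}(x+\delta),\,y\big) \;+\; \lambda \max_{\|\delta\|_{p}\le\epsilon}\big\|f_{\theta}(x+\delta)-f_{\theta}(x)\big\|\Big]

The paper's specific objective, including how linearity in the transformed input and feature space is enforced, is given in the full text cited below.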

Citation (APA)

Sarkar, A., & Iyengar, R. (2020). Enforcing Linearity in DNN Succours Robustness and Adversarial Image Generation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12396 LNCS, pp. 52–64). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-61609-0_5
