Towards evolving robust neural architectures to defend from adversarial attacks

Abstract

Neural networks are known to misclassify a class of subtly modified images known as adversarial samples. Numerous defences against these adversarial samples have recently been proposed; however, none have consistently improved the robustness of neural networks. Here, we propose using performance on adversarial samples as the function evaluation that drives the search for robust neural architectures able to resist such attacks. Experiments with existing neural architecture search algorithms from the literature reveal that, although they find accurate architectures, they are unable to find robust ones. A key cause of this lies in their confined search spaces. By devising a novel neural architecture search, we were able to evolve an architecture that is intrinsically accurate on adversarial samples. Thus, the results demonstrate that more robust architectures exist and open up a new range of possibilities for the development and exploration of neural networks using neural architecture search.
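
The core idea in the abstract is to score candidate architectures by their accuracy on adversarial samples and let that score drive an evolutionary search. The sketch below is purely illustrative and not the authors' implementation: the attack (FGSM), the epsilon value, the assumption of inputs in [0, 1], and the toy selection loop with a placeholder `mutate` are all assumptions introduced here to make the idea concrete.

```python
# Hypothetical sketch: adversarial accuracy as the fitness signal for an
# evolutionary architecture search. FGSM and its hyperparameters are
# illustrative choices only, not prescribed by the paper.
import copy

import torch
import torch.nn.functional as F


def fgsm(model, images, labels, eps=8 / 255):
    """Generate FGSM adversarial samples (assumes inputs scaled to [0, 1])."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    return (images + eps * grad.sign()).clamp(0, 1).detach()


def adversarial_fitness(model, loader, device="cpu"):
    """Fitness of a candidate architecture = accuracy on adversarial samples."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = fgsm(model, images, labels)
        with torch.no_grad():
            preds = model(adv).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total


def evolve(population, loader, generations=10):
    """Toy selection loop: keep the half of the population that scores best
    on adversarial samples, then refill it with mutated copies."""
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda m: adversarial_fitness(m, loader),
                        reverse=True)
        survivors = scored[: max(1, len(scored) // 2)]
        # Placeholder mutation: a real search would alter layers/connections.
        population = survivors + [copy.deepcopy(m) for m in survivors]
    return population[0]
```

The design point the abstract hinges on is that conventional neural architecture search maximises clean accuracy; replacing that objective with adversarial accuracy (as sketched above) is what allows the search to favour intrinsically robust architectures.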

Citation (APA)

Kotyan, S., & Vargas, D. V. (2020). Towards evolving robust neural architectures to defend from adversarial attacks. In GECCO 2020 Companion - Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion (pp. 135–136). Association for Computing Machinery, Inc. https://doi.org/10.1145/3377929.3389962
