Reducing Search Space of Genetic Algorithms for Fast Black Box Attacks on Image Classifiers

Abstract

Recent research on the reliability of Deep Neural Networks (DNNs) revealed that it is easy to produce images that are completely unrecognizable to humans but that DNNs classify as recognizable objects with 99.99% confidence. The present study investigates the effect of search-space reduction on the capability of Genetic Algorithms (GAs) to purposefully fool DNNs. To this end, we introduce a GA with corresponding modifications that is able to fool neural networks trained to classify objects from well-known benchmark image data sets such as GTSRB or MNIST. The developed GA is then extended so that it can reduce the search space without changing its general behavior. Empirical results on MNIST indicate a significantly decreased number of generations needed to reach the targeted confidence of an MNIST image classifier (12 instead of 228 generations). Experiments on GTSRB, a more challenging object classification scenario, show similar results. Fooling DNNs is therefore not only easily possible but can also be done very fast. Our study thus substantiates an already recognized potential danger for DNN-based computer vision and object recognition applications.
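The attack the abstract describes can be illustrated with a minimal sketch: a GA evolves candidate images, using the black-box classifier's confidence for a target class as the fitness function, and search-space reduction is modeled as a pixel mask that limits which pixels mutation may change. Everything below is a hypothetical illustration, not the authors' implementation; in particular `classifier_confidence` is a stand-in for the real MNIST/GTSRB model, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def classifier_confidence(image, target_class):
    # Hypothetical stand-in for the black-box DNN: returns a score
    # in [0, 1]. In the paper's setting this would be the model's
    # softmax confidence for the target class.
    weights = np.linspace(0, 1, image.size).reshape(image.shape)
    score = float((image * weights).sum() / weights.sum())
    return score if target_class == 1 else 1.0 - score

def evolve(target_class, shape=(8, 8), pop_size=20, generations=50,
           mask=None, target_conf=0.95):
    """Simple GA with truncation selection, uniform crossover and
    Gaussian mutation. `mask` restricts mutation to a subset of
    pixels, illustrating search-space reduction."""
    if mask is None:
        mask = np.ones(shape, dtype=bool)  # full search space
    pop = rng.random((pop_size, *shape))
    for gen in range(generations):
        fitness = np.array([classifier_confidence(ind, target_class)
                            for ind in pop])
        best = fitness.argmax()
        if fitness[best] >= target_conf:
            return pop[best], float(fitness[best]), gen
        # Selection: keep the fitter half of the population as parents.
        parents = pop[np.argsort(fitness)[::-1][: pop_size // 2]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(shape) < 0.5, a, b)  # crossover
            noise = rng.normal(0.0, 0.3, shape)
            # Mutate only the pixels inside the reduced search space.
            child = np.clip(child + noise * mask, 0.0, 1.0)
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    fitness = np.array([classifier_confidence(ind, target_class)
                        for ind in pop])
    best = fitness.argmax()
    return pop[best], float(fitness[best]), generations
```

Under these assumptions, passing a mask that covers only the pixels the classifier is most sensitive to corresponds to the paper's search-space reduction, and one would compare the generation count returned by `evolve` with and without the mask.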

Citation (APA)

Brandl, J., Breinl, N., Demmler, M., Hartmann, L., Hähner, J., & Stein, A. (2019). Reducing Search Space of Genetic Algorithms for Fast Black Box Attacks on Image Classifiers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11793 LNAI, pp. 115–122). Springer Verlag. https://doi.org/10.1007/978-3-030-30179-8_9
