Evolutionary algorithms deceive humans and machines at image classification: An extended proof of concept on two scenarios

Abstract

The range of applications of Neural Networks encompasses image classification. However, Neural Networks are vulnerable to attacks and may misclassify adversarial images, leading to potentially disastrous consequences. Building on some of our previous work, we provide an extended proof of concept of a black-box, targeted, non-parametric attack that uses evolutionary algorithms to fool both Neural Networks and humans at the task of image classification. Our feasibility study is performed on VGG-16 trained on CIFAR-10. For any category c_A of CIFAR-10, one chooses an image A classified by VGG-16 as belonging to c_A. From there, two scenarios are addressed. In the first scenario, a target category c_t ≠ c_A is fixed a priori. We construct an evolutionary algorithm that evolves A into a modified image that VGG-16 classifies as belonging to c_t. In the second scenario, we construct another evolutionary algorithm that evolves A into a modified image that VGG-16 is unable to classify. In both scenarios, the obtained adversarial images remain so close to the original that a human would likely still classify them as belonging to c_A.
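The abstract only outlines the attack at a high level. As a rough illustration of the general idea, the sketch below shows what a simple (1+λ)-style black-box evolutionary attack on a CIFAR-10 classifier could look like: candidate images are produced by small random perturbations of the best image so far, and fitness rewards the classifier's confidence in the target category while penalising the distance to the original image. The classify wrapper, the fitness weighting, and all hyperparameters are assumptions chosen for illustration, not the authors' actual algorithm or settings.

import numpy as np
import torch

def classify(model, image):
    """Black-box query: return softmax probabilities for a (32, 32, 3) uint8
    CIFAR-10 image. `model` stands in for a network such as VGG-16 trained on
    CIFAR-10 (assumed to be a torch module; loading it is out of scope here)."""
    x = torch.from_numpy(image).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        return torch.softmax(model(x), dim=1).squeeze(0).numpy()

def evolve_targeted(model, ancestor, target, pop_size=40, generations=500,
                    sigma=8.0, dist_weight=1e-3):
    """Evolve `ancestor` toward target class `target` (first scenario, sketch).

    Fitness = probability of `target` minus a penalty on the L2 distance to
    the ancestor, so the adversarial image stays visually close to the
    original. All hyperparameters are illustrative guesses.
    """
    ancestor = ancestor.astype(np.float32)
    best = ancestor.copy()

    def fitness(img):
        probs = classify(model, img.clip(0, 255).astype(np.uint8))
        return probs[target] - dist_weight * np.linalg.norm(img - ancestor)

    best_fit = fitness(best)
    for _ in range(generations):
        # Mutation: add small Gaussian noise to copies of the current best image.
        offspring = best + np.random.normal(0.0, sigma,
                                            size=(pop_size,) + best.shape)
        fits = np.array([fitness(child) for child in offspring])
        if fits.max() > best_fit:
            best_fit = fits.max()
            best = offspring[fits.argmax()]
        # Stop once the classifier assigns `target` the highest probability.
        probs = classify(model, best.clip(0, 255).astype(np.uint8))
        if probs.argmax() == target:
            break
    return best.clip(0, 255).astype(np.uint8)

The second scenario would follow the same loop with a different fitness function, for example one that drives the classifier's maximum class probability below a confidence threshold instead of maximising the probability of a fixed target.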

Citation (APA)

Chitic, R., Leprévost, F., & Bernard, N. (2021). Evolutionary algorithms deceive humans and machines at image classification: An extended proof of concept on two scenarios. Journal of Information and Telecommunication, 5(1), 121–143. https://doi.org/10.1080/24751839.2020.1829388
