A CMA-ES-Based Adversarial Attack on Black-Box Deep Neural Networks


Abstract

Deep neural networks (DNNs) are widely used in AI-controlled Cyber-Physical Systems (CPS) to control cars, robots, water treatment plants, and railways. However, DNNs are vulnerable to well-designed input samples called adversarial examples. Adversarial attacks are an important technique for detecting and improving the security of neural networks. Existing attacks, including the state-of-the-art black-box attack, have a low success rate and issue invalid queries that do not help determine the direction in which to generate adversarial examples. For these reasons, this paper proposes a CMA-ES-based adversarial attack on black-box DNNs. First, an efficient method to reduce the number of invalid queries is introduced. Second, a black-box attack that generates adversarial examples by fitting a high-dimensional independent Gaussian distribution of the local solution space is proposed. Finally, a new CMA-based perturbation compression method is applied to make the process of reducing the perturbation smoother. Experimental results on ImageNet classifiers show that the proposed attack achieves a higher success rate than the state-of-the-art black-box attack while reducing the number of queries by 30%.
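The abstract describes the attack only at a high level, and the paper's exact procedure is not reproduced here. As a loose illustration of the underlying idea (using CMA-ES to search a perturbation space through score-only queries, with a penalty that keeps the perturbation small) here is a minimal sketch built on the pycma library and a toy linear stand-in for the black-box model. Everything in it, including the toy classifier, the fitness function, and the hyperparameters, is an assumption for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a generic CMA-ES black-box attack loop,
# not the paper's algorithm. Assumes `pip install cma numpy`.
import numpy as np
import cma

rng = np.random.default_rng(0)

# Toy stand-in for a black-box classifier that returns class scores only.
# (The paper attacks real ImageNet DNNs; this keeps the sketch runnable.)
W = rng.normal(size=(10, 64))            # 10 classes, 64-dim "image"

def black_box_scores(x):
    return W @ x                         # each query returns scores, no gradients

x_orig = rng.normal(size=64)             # the clean input
true_label = int(np.argmax(black_box_scores(x_orig)))

def fitness(delta):
    """Loss to minimize: margin of the true class over the best other class,
    plus a penalty on perturbation size. Negative margin => misclassified."""
    scores = black_box_scores(x_orig + delta)
    margin = scores[true_label] - np.max(np.delete(scores, true_label))
    return margin + 0.05 * np.linalg.norm(delta)

# CMA-ES searches the perturbation space using only query feedback.
es = cma.CMAEvolutionStrategy(np.zeros(64), 0.5,
                              {"maxiter": 200, "verbose": -9})
while not es.stop():
    deltas = es.ask()                             # sample candidate perturbations
    es.tell(deltas, [fitness(d) for d in deltas]) # rank via black-box queries only
    if es.result.fbest < 0:                       # loss < 0 guarantees the margin
        break                                     # is negative: misclassification

delta_best = es.result.xbest
adv_label = int(np.argmax(black_box_scores(x_orig + delta_best)))
print("original:", true_label, "adversarial:", adv_label,
      "perturbation L2:", np.linalg.norm(delta_best))
```

Note that this sketch omits the paper's three contributions (invalid-query reduction, fitting a high-dimensional independent Gaussian to the local solution space, and CMA-based perturbation compression); it only shows the query-based evolutionary search they build on.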

Cite

APA

Kuang, X., Liu, H., Wang, Y., Zhang, Q., Zhang, Q., & Zheng, J. (2019). A CMA-ES-based adversarial attack on black-box deep neural networks. IEEE Access, 7, 172938–172947. https://doi.org/10.1109/ACCESS.2019.2956553
