An empirical study of derivative-free-optimization algorithms for targeted black-box attacks in deep neural networks

Abstract

We perform a comprehensive study of the performance of derivative-free optimization (DFO) algorithms for generating targeted black-box adversarial attacks on Deep Neural Network (DNN) classifiers, assuming the perturbation energy is bounded by an ℓ∞ constraint and the number of queries to the network is limited. This paper considers four pre-existing state-of-the-art DFO-based algorithms along with a newly developed algorithm built on BOBYQA, a model-based DFO method. We compare these algorithms in a variety of settings according to the fraction of images that they successfully misclassify given a maximum number of queries to the DNN. The experiments show how the likelihood of finding an adversarial example depends on both the algorithm used and the setting of the attack: algorithms that restrict the search for adversarial examples to the vertices of the ℓ∞ constraint perform particularly well against networks without structural defenses, while the presented BOBYQA-based algorithm performs better for especially small perturbation energies. This variance in performance highlights the importance of comparing new algorithms to the state-of-the-art in a variety of settings, and of testing the effectiveness of adversarial defenses against as wide a range of algorithms as possible.
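The abstract's observation about vertex-searching attacks can be made concrete with a small sketch. Below is a minimal, hypothetical illustration (in Python with NumPy) of a targeted black-box attack that searches only the vertices of the ℓ∞ ball, i.e. perturbations whose entries are all ±ε, using random sign flips as the derivative-free step under a hard query budget. The names `targeted_vertex_attack` and `query_model` are placeholders, and the toy model in the demo is invented for illustration; none of this reproduces the paper's BOBYQA-based algorithm or any of the compared methods.

```python
# Minimal sketch (NOT the paper's method): a targeted black-box attack
# restricted to the vertices of the ell-infinity ball, i.e. perturbations
# delta with every entry equal to +eps or -eps, using random sign flips
# as the derivative-free search step. `query_model` is a hypothetical
# stand-in for the attacked DNN; each call counts as one query.
import numpy as np

def targeted_vertex_attack(x, query_model, target, eps, max_queries, rng=None):
    rng = rng or np.random.default_rng(0)
    signs = rng.choice([-1.0, 1.0], size=x.shape)  # start at a random vertex
    best = query_model(np.clip(x + eps * signs, 0.0, 1.0), target)
    queries = 1
    while queries < max_queries:
        # Propose flipping the signs of a small random block of coordinates.
        idx = rng.choice(x.size, size=max(1, x.size // 100), replace=False)
        candidate = signs.copy()
        candidate.flat[idx] *= -1.0
        score = query_model(np.clip(x + eps * candidate, 0.0, 1.0), target)
        queries += 1
        if score > best:  # keep flips that raise the target-class score
            signs, best = candidate, score
    return np.clip(x + eps * signs, 0.0, 1.0), best, queries

if __name__ == "__main__":
    # Toy demo with an invented linear "model" so the sketch runs end-to-end.
    rng = np.random.default_rng(1)
    w = rng.standard_normal(28 * 28)
    def query_model(img, target):  # hypothetical: target-class score in (0, 1)
        return 1.0 / (1.0 + np.exp(-img.ravel() @ w))
    x = rng.random(28 * 28)
    adv, p, n = targeted_vertex_attack(x, query_model, target=0,
                                       eps=0.1, max_queries=500)
    print(f"target score {p:.3f} after {n} queries")
```

In practice, `query_model` would wrap the attacked DNN and return, for example, the softmax probability of the target class, with every forward pass counted against the query budget; an attack succeeds once the target class becomes the network's top prediction.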

Citation

Ughi, G., Abrol, V., & Tanner, J. (2022). An empirical study of derivative-free-optimization algorithms for targeted black-box attacks in deep neural networks. Optimization and Engineering. https://doi.org/10.1007/s11081-021-09652-w
