Anti-bandit Neural Architecture Search for Model Defense


Abstract

Deep convolutional neural networks (DCNNs) have dominated as the best performers in machine learning, but can be challenged by adversarial attacks. In this paper, we defend against adversarial attacks using neural architecture search (NAS) based on a comprehensive search over denoising blocks, weight-free operations, Gabor filters, and convolutions. The resulting anti-bandit NAS (ABanditNAS) incorporates a new operation evaluation measure and search process based on the lower and upper confidence bounds (LCB and UCB). Unlike the conventional bandit algorithm, which uses UCB for evaluation only, we use UCB to abandon arms for search efficiency and LCB for a fair competition between arms. Extensive experiments demonstrate that ABanditNAS is about twice as fast as the state-of-the-art NAS method, while achieving an 8.73% improvement over prior art on CIFAR-10 under PGD-7.
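To make the anti-bandit idea concrete, the sketch below illustrates one plausible reading of a confidence-bound elimination loop in that spirit: candidate operations are treated as arms, selection uses the lower confidence bound so under-sampled arms still get a fair trial, and the arm with the worst upper confidence bound is abandoned after each stage. This is a minimal illustration, not the paper's implementation; the operation names, the `evaluate` callback, `pulls_per_stage`, and the exploration constant `c` are hypothetical placeholders, and the bounds are standard Hoeffding-style confidence intervals.

```python
import math
import random


def anti_bandit_select(ops, evaluate, pulls_per_stage=10, c=1.0):
    """Successively abandon candidate operations (arms) until one remains.

    Minimal sketch of a confidence-bound elimination scheme (assumption,
    not the paper's exact procedure).
      ops             -- candidate operation names (hypothetical)
      evaluate        -- callable returning a noisy reward in [0, 1] for an op
      pulls_per_stage -- samples drawn before each abandonment step
      c               -- exploration constant controlling the bound width
    """
    stats = {op: [0.0, 0] for op in ops}   # op -> [reward sum, pull count]
    alive = list(ops)
    total = 0

    def mean(op):
        s, n = stats[op]
        return s / n if n else 0.0

    def radius(op):
        # Confidence radius shrinks as an arm is sampled more often.
        n = max(stats[op][1], 1)
        return c * math.sqrt(2.0 * math.log(max(total, 2)) / n)

    while len(alive) > 1:
        for _ in range(pulls_per_stage):
            total += 1
            # LCB-based selection: under-sampled arms have a small lower
            # bound, so they keep getting a fair chance to compete.
            pick = min(alive, key=lambda op: mean(op) - radius(op))
            reward = evaluate(pick)
            stats[pick][0] += reward
            stats[pick][1] += 1
        # UCB-based abandonment: drop the arm whose optimistic estimate is
        # worst, shrinking the search space after every stage.
        worst = min(alive, key=lambda op: mean(op) + radius(op))
        alive.remove(worst)
    return alive[0]


if __name__ == "__main__":
    # Toy usage: arms are operation names with hidden quality levels.
    quality = {"sep_conv_3x3": 0.80, "gabor_filter": 0.70,
               "denoise_block": 0.75, "skip_connect": 0.40}
    survivor = anti_bandit_select(
        list(quality), lambda op: quality[op] + random.gauss(0.0, 0.1))
    print("surviving operation:", survivor)
```

The design choice mirrored here is that abandonment (via UCB) reduces the number of arms, and hence the search cost, at every stage, while LCB-driven selection prevents an arm from being discarded merely because it was under-sampled.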

Citation (APA)

Chen, H., Zhang, B., Xue, S., Gong, X., Liu, H., Ji, R., & Doermann, D. (2020). Anti-bandit Neural Architecture Search for Model Defense. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12358 LNCS, pp. 70–85). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58601-0_5
