ESAE: Evolutionary Strategy-Based Architecture Evolution


Abstract

Although deep neural networks (DNNs) play important roles in many fields, designing DNN architectures can be challenging due to the difficulty of input data representation, the huge number of parameters, and the complex relationships between layers. To overcome the obstacles of architecture design, we developed a new method to generate optimal DNN structures, named Evolutionary Strategy-based Architecture Evolution (ESAE), consisting of a bi-level representation and a probability distribution learning approach. The bi-level representation encodes architectures at the gene and parameter levels. The probability distribution learning approach ensures efficient convergence of the architecture search process. The effectiveness of the proposed ESAE is verified on Fashion-MNIST and CIFAR-10. The evolved DNNs, starting from a trivial initial architecture with a single convolutional layer, achieved accuracies of 94.48% and 93.49% on Fashion-MNIST and CIFAR-10, respectively, and require remarkably lower hardware cost, in terms of GPUs and running time, than existing state-of-the-art manually designed architectures.
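To illustrate the kind of search the abstract describes, the following is a minimal, hypothetical sketch of a probability-distribution-driven architecture search loop, written in the spirit of ESAE but not the authors' actual implementation: the gene alphabet, the PBIL-style distribution update, and the toy fitness function are all assumptions for illustration.

```python
import random

# Hypothetical gene alphabet: each gene position picks one layer type.
LAYER_CHOICES = ["conv3x3", "conv5x5", "maxpool", "identity"]


def sample_architecture(dist, n_genes, rng):
    """Sample one architecture: one layer choice per gene position,
    drawn from that position's learned probability distribution."""
    return [rng.choices(LAYER_CHOICES, weights=dist[g])[0] for g in range(n_genes)]


def toy_fitness(arch):
    """Stand-in for validation accuracy (illustrative only):
    rewards architectures with more convolutional layers."""
    return sum(1.0 for layer in arch if layer.startswith("conv")) / len(arch)


def evolve(n_genes=4, pop_size=8, generations=20, lr=0.3, seed=0):
    rng = random.Random(seed)
    # Start from a uniform distribution per gene position.
    dist = [[1.0 / len(LAYER_CHOICES)] * len(LAYER_CHOICES)
            for _ in range(n_genes)]
    best = None
    for _ in range(generations):
        pop = [sample_architecture(dist, n_genes, rng) for _ in range(pop_size)]
        elite = max(pop, key=toy_fitness)
        if best is None or toy_fitness(elite) > toy_fitness(best):
            best = elite
        # Sharpen each gene's distribution toward the elite's choice
        # (a PBIL-style probability update).
        for g, choice in enumerate(elite):
            for i, opt in enumerate(LAYER_CHOICES):
                target = 1.0 if opt == choice else 0.0
                dist[g][i] = (1 - lr) * dist[g][i] + lr * target
    return best, dist


best, dist = evolve()
```

In a real setting, `toy_fitness` would train and validate the sampled network, which is where the reported GPU and running-time costs arise; the distribution update is what lets the search converge from a trivial single-convolution starting point.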

Citation (APA)

Gu, X., Meng, Z., Liang, Y., Xu, D., Huang, H., Han, X., & Wu, C. (2020). ESAE: Evolutionary Strategy-Based Architecture Evolution. In Communications in Computer and Information Science (Vol. 1159 CCIS, pp. 193–208). Springer. https://doi.org/10.1007/978-981-15-3425-6_16
