MOO-DNAS: Efficient Neural Network Design via Differentiable Architecture Search Based on Multi-Objective Optimization


Abstract

Progress in improving the performance of neural networks has come at a high price in terms of computational cost and human expertise. Fortunately, the emergence of Neural Architecture Search (NAS) has accelerated network design, but most existing works optimize only for high accuracy without penalizing model complexity. In this paper, we propose MOO-DNAS, an efficient CNN architecture search framework with multi-objective optimization based on differentiable neural architecture search. The main goal is to trade off two competing objectives, classification accuracy and network latency, so that the search algorithm can discover an efficient model while maintaining high accuracy. To support layer diversity and hardware friendliness, we construct a novel factorized hierarchical search space. Furthermore, we propose a robust sampling strategy, 'hard-sampling', which derives final structures with higher average performance by keeping only the highest-scoring operator. Experimental results on the benchmark datasets MNIST, CIFAR-10, and CIFAR-100 demonstrate the effectiveness of the proposed method. The searched architectures, MOO-DNAS-Nets, achieve competitive accuracy with fewer parameters and FLOPs, and the search cost is less than one GPU-day.
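The latency-aware objective and the 'hard-sampling' rule described above can be illustrated with a short sketch. The paper's exact formulation is not reproduced on this page, so everything below is an assumption for illustration: the FBNet-style scalarization (cross-entropy plus a softmax-weighted expected-latency penalty), the per-layer `latency_table` lookup, and the trade-off weight `lam` are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical per-layer candidate-op latencies (ms), e.g. measured on the
# target hardware and stored in a lookup table. Shape: num_layers x num_ops.
latency_table = torch.tensor([
    [0.8, 1.2, 2.0],   # layer 0: e.g. 3x3 conv, 5x5 conv, skip (assumed ops)
    [0.9, 1.4, 2.3],   # layer 1
])

# Architecture parameters: one logit per candidate op per layer, relaxed
# with softmax so the search over operators stays differentiable.
alpha = nn.Parameter(torch.zeros_like(latency_table))

def expected_latency(alpha, latency_table):
    """Differentiable latency estimate: softmax-weighted sum of op latencies."""
    probs = F.softmax(alpha, dim=-1)
    return (probs * latency_table).sum()

def multi_objective_loss(logits, targets, alpha, latency_table, lam=0.1):
    """Scalarized trade-off between accuracy and latency.
    `lam` (an assumed hyperparameter) controls the latency penalty."""
    ce = F.cross_entropy(logits, targets)
    return ce + lam * expected_latency(alpha, latency_table)

def hard_sample(alpha):
    """'Hard-sampling': derive the final architecture by keeping only the
    highest-scoring operator in each layer of the relaxed supernet."""
    return alpha.argmax(dim=-1)  # index of the chosen op per layer
```

After the search converges, `hard_sample(alpha)` would deterministically pick one operator per layer, which matches the abstract's description of keeping the highest-scoring operator rather than sampling stochastically.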

Citation (APA)

Wei, H., Lee, F., Hu, C., & Chen, Q. (2022). MOO-DNAS: Efficient Neural Network Design via Differentiable Architecture Search Based on Multi-Objective Optimization. IEEE Access, 10, 14195–14207. https://doi.org/10.1109/ACCESS.2022.3148323
