Experienced optimization with reusable directional model for hyper-parameter search

Abstract

Hyper-parameter selection is a crucial yet difficult issue in machine learning. For this problem, derivative-free optimization has played an irreplaceable role. However, derivative-free optimization commonly requires many hyper-parameter samples, and each sample can be expensive, since evaluating it requires training the learning model. To tackle this issue, in this paper, we propose an experienced optimization approach, i.e., learning how to optimize better from a set of historical optimization processes. From the historical optimization processes on previous datasets, a directional model is trained to predict the direction of the next good hyper-parameter. The directional model is then reused to guide the optimization when learning on new datasets. We implement this mechanism within a state-of-the-art derivative-free optimization method, SRACOS, and conduct experiments on learning the hyper-parameters of heterogeneous ensembles and neural network architectures. Experimental results verify that the proposed approach can significantly improve the learning accuracy within a limited hyper-parameter sample budget.
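The idea of reusing a directional model can be illustrated with a minimal sketch. This is not the paper's SRACOS-based implementation: here the "directional model" is simply the average improving step collected from historical search runs, and all function and variable names (`collect_experience`, `train_directional_model`, `guided_search`) are hypothetical. It only shows the two-phase structure: learn a promising search direction from past optimization traces, then use it to bias proposals on a new task.

```python
import random

def collect_experience(objective, n_samples, dim, rng):
    """Historical phase: run a naive local search on a previous task
    and record (step, improved?) pairs as optimization experience."""
    x = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    fx = objective(x)
    experience = []
    for _ in range(n_samples):
        step = [rng.gauss(0.0, 0.3) for _ in range(dim)]
        cand = [xi + si for xi, si in zip(x, step)]
        fc = objective(cand)
        experience.append((step, fc < fx))  # did this step improve?
        if fc < fx:
            x, fx = cand, fc
    return experience

def train_directional_model(experience, dim):
    """Crude stand-in for the paper's directional model: the mean of
    all improving steps, i.e. a learned prior search direction."""
    good = [s for s, ok in experience if ok]
    if not good:
        return [0.0] * dim
    return [sum(s[i] for s in good) / len(good) for i in range(dim)]

def guided_search(objective, direction, n_samples, dim, rng):
    """New-task phase: local search whose proposals are biased toward
    the direction learned from historical runs."""
    x = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    fx = objective(x)
    for _ in range(n_samples):
        cand = [xi + di + rng.gauss(0.0, 0.1)
                for xi, di in zip(x, direction)]
        fc = objective(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx
```

As a toy usage, suppose previous and new tasks share structure (the optimum lies in a consistent direction from the starting region, e.g. a quadratic centered at (2, 2)): experience from the old task yields a direction with positive components, and the guided search on the new task converges within the sample budget. The paper's actual directional model is trained across many historical processes and plugged into SRACOS, not this simple averaging.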

Citation (APA)

Hu, Y. Q., Yu, Y., & Zhou, Z. H. (2018). Experienced optimization with reusable directional model for hyper-parameter search. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 2276–2282). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/315
