Machine learning and deep learning have attracted considerable attention from researchers because of their promising predictive performance and the availability of extensive high-dimensional data and high-performance computing hardware. However, the performance of these algorithms is sensitive to the choice of hyperparameters, and optimizing these hyperparameters is usually computationally expensive. For this reason, this study proposed a novel sequential search algorithm, called Iterative Decision Tree (IDT), that uses decision tree regression as the surrogate function. In each iteration, several new hyperparameter combinations were selected from a few of the best-performing leaves of the decision tree. Compared with conventional sequential search algorithms, this approach reduced the computational time spent repeatedly retraining the surrogate function and enabled parallel evaluation of candidates. To confirm the effectiveness of the proposed algorithm, it was compared with six popular benchmark optimization algorithms: Grid Search, Random Search, Bayesian Optimization, Random Forest, Tree-structured Parzen Estimator, and Genetic Algorithm. The comparison covered the optimization of three benchmark nonconvex functions and the hyperparameter tuning of two machine learning algorithms (Support Vector Machine and Random Forest) and two deep learning models (Autoencoder and Convolutional Neural Network). As a result, the proposed algorithm achieved competitive performance with high stability, while additionally providing feature importance metrics for the hyperparameters.
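To make the leaf-based candidate selection concrete, below is a minimal sketch of a decision-tree-surrogate search in the spirit of the abstract, not the authors' implementation. It assumes minimization, uses scikit-learn's DecisionTreeRegressor as the surrogate, and approximates each leaf's region by the bounding box of the evaluated points that fall into it; the function name idt_style_search and all default parameters are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def idt_style_search(objective, bounds, n_init=20, n_iter=10,
                     n_leaves=3, n_per_leaf=5):
    """Illustrative decision-tree-surrogate search (not the published IDT).

    bounds: array of shape (d, 2) giving [low, high] per hyperparameter.
    Assumes minimization; leaf regions are approximated by the bounding
    boxes of the evaluated points assigned to them.
    """
    d = bounds.shape[0]
    # Initial random design over the search space
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, d))
    y = np.array([objective(x) for x in X])

    for _ in range(n_iter):
        # Refit the decision-tree surrogate on all evaluations so far
        tree = DecisionTreeRegressor(max_leaf_nodes=16).fit(X, y)
        leaf_id = tree.apply(X)

        # Rank leaves by the mean observed objective of their points
        leaves = np.unique(leaf_id)
        means = np.array([y[leaf_id == l].mean() for l in leaves])
        best = leaves[np.argsort(means)[:n_leaves]]

        # Draw new candidates inside the best-performing leaves; since
        # they are all chosen before any is evaluated, the evaluations
        # can run in parallel.
        cands = []
        for l in best:
            pts = X[leaf_id == l]
            lo, hi = pts.min(axis=0), pts.max(axis=0)
            cands.append(rng.uniform(lo, hi, size=(n_per_leaf, d)))
        cands = np.vstack(cands)
        X = np.vstack([X, cands])
        y = np.concatenate([y, [objective(x) for x in cands]])

    i = int(np.argmin(y))
    return X[i], y[i]

# Example: minimize a simple quadratic over [-5, 5]^2
best_x, best_y = idt_style_search(lambda x: float(np.sum(x**2)),
                                  np.array([[-5.0, 5.0], [-5.0, 5.0]]))
print(best_x, best_y)
```

Selecting a whole batch of candidates per iteration, rather than one point per surrogate refit, is what reduces the number of surrogate trainings and makes the objective evaluations parallelizable, as claimed in the abstract.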
CITATION STYLE
Saum, N., Sugiura, S., & Piantanakulchai, M. (2022). Hyperparameter Optimization Using Iterative Decision Tree (IDT). IEEE Access, 10, 106812–106827. https://doi.org/10.1109/ACCESS.2022.3212387