Auto-Ensemble: An Adaptive Learning Rate Scheduling Based Deep Learning Model Ensembling

Abstract

Ensembling deep learning models is a shortcut to deploying them in new scenarios, since it avoids tuning network architectures, losses, and training algorithms from scratch. However, it is difficult to collect enough accurate and diverse models from a single training run. This paper proposes Auto-Ensemble (AE), which collects checkpoints of a deep learning model and ensembles them automatically with an adaptive learning rate scheduling algorithm. The advantage of this method is that it drives the model to converge to various local optima by scheduling the learning rate within a single training run. When the number of local optimal solutions tends to saturate, all the collected checkpoints are used for the ensemble. The method is universal and can be applied to various scenarios. Experimental results on multiple datasets and neural networks demonstrate that it is effective and competitive, especially for few-shot learning. In addition, we propose a method to measure the distance between models, which ensures the accuracy and diversity of the collected checkpoints.
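To make the idea concrete, below is a minimal sketch of the general snapshot-style approach the abstract describes: a cyclic (warm-restart) learning rate drives the model toward a new local optimum in each cycle, one checkpoint is saved per cycle, and the checkpoints' softmax outputs are averaged at test time. The cosine warm-restart schedule, the fixed number of cycles, and all function names here are illustrative assumptions; the paper's AE algorithm adapts the schedule and stops collecting when the local optima saturate, which this sketch does not implement.

# Illustrative sketch only (assumptions noted in the lead-in above):
# collect one checkpoint per learning-rate cycle, then average softmax outputs.
import copy
import torch
import torch.nn.functional as F
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

def collect_snapshots(model, train_loader, device, cycles=5, epochs_per_cycle=10, base_lr=0.1):
    """Train with warm restarts and keep one checkpoint at the end of each cycle."""
    optimizer = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)
    scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=epochs_per_cycle)
    snapshots = []
    for epoch in range(cycles * epochs_per_cycle):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()
        scheduler.step()
        if (epoch + 1) % epochs_per_cycle == 0:
            # Learning rate is near its minimum here, so the model sits in a local optimum.
            snapshots.append(copy.deepcopy(model.state_dict()))
    return snapshots

def ensemble_predict(model, snapshots, x):
    """Average the softmax outputs of all collected checkpoints."""
    model.eval()
    probs = []
    with torch.no_grad():
        for state in snapshots:
            model.load_state_dict(state)
            probs.append(F.softmax(model(x), dim=1))
    return torch.stack(probs).mean(dim=0)

In practice the restart period and the number of cycles would be tuned per dataset; the paper's adaptive scheduler and its model-distance measure decide these choices automatically rather than fixing them in advance.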

Cite

APA

Yang, J., & Wang, F. (2020). Auto-Ensemble: An Adaptive Learning Rate Scheduling Based Deep Learning Model Ensembling. IEEE Access, 8, 217499–217509. https://doi.org/10.1109/ACCESS.2020.3041525
