Evaluation of robustness and performance of Early Stopping Rules with Multi Layer Perceptrons

21 citations · 21 Mendeley readers

Abstract

In this paper, we evaluate different Early Stopping Rules (ESR), and combinations of them, for stopping the training of Multi Layer Perceptrons (MLP) trained with stochastic gradient descent (also known as online error backpropagation) before a predefined maximum number of epochs is reached. We focus our evaluation on classification tasks, since most works use MLPs for classification rather than regression. Early stopping is important for two reasons: it prevents overfitting, and it can dramatically reduce training time. Today, a growing number of applications involve unsupervised and automatic training, e.g. in ensemble learning, where automatic stopping rules are necessary to keep training time low. The current literature is not specific about which rule to use, when to use it, or how robust it is, so this paper revisits the issue. We tested on PROBEN1, a collection of UCI databases, and on MNIST. © 2009 IEEE.
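As an illustration of the kind of rule being evaluated, the sketch below implements one well-known early stopping criterion, the generalization-loss rule GL_α from Prechelt's PROBEN1 work: training stops once the current validation error exceeds the best validation error seen so far by more than α percent. This is a minimal, generic sketch for intuition, not a reproduction of the specific rules or combinations tested in the paper; the function names and the threshold value are illustrative choices.

```python
def generalization_loss(val_errors):
    """GL(t) = 100 * (E_va(t) / E_opt(t) - 1), where E_va(t) is the
    current validation error and E_opt(t) the lowest seen so far."""
    e_opt = min(val_errors)  # best validation error up to epoch t
    return 100.0 * (val_errors[-1] / e_opt - 1.0)

def should_stop(val_errors, alpha=5.0):
    """Stop training once generalization loss exceeds alpha percent."""
    return generalization_loss(val_errors) > alpha

# Example: validation error per epoch; it improves, then degrades.
history = [1.00, 0.80, 0.90]
print(should_stop(history, alpha=5.0))  # GL = 12.5% > 5% -> True
```

In an actual training loop, `val_errors` would be appended to after each epoch, and the weights from the epoch with the lowest validation error would be the ones retained.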

CITATION STYLE

APA

Lodwich, A., Rangoni, Y., & Breuel, T. (2009). Evaluation of robustness and performance of Early Stopping Rules with Multi Layer Perceptrons. In Proceedings of the International Joint Conference on Neural Networks (pp. 1877–1884). https://doi.org/10.1109/IJCNN.2009.5178626
