Incremental learning with a stopping criterion: experimental results

Abstract

We recently proposed a new incremental procedure for supervised learning with noisy data. Each step consists of adding to the current network a new unit (or a small 2- or 3-neuron network) trained to learn the error of the current network. The incremental step is repeated until the error of the current network can be considered as noise. The stopping criterion is very simple and can be deduced directly from a statistical test on the estimated parameters of the new unit. In this paper, we present an experimental comparison between a few variants of the incremental algorithm and the classic backpropagation algorithm, with respect to convergence, speed of convergence, and optimal number of neurons. The experimental results demonstrate the efficacy of this new incremental scheme, especially in avoiding spurious minima and in designing a network of well-suited size. The number of basic operations is also reduced, yielding an average gain in convergence speed of about 20%.
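To make the procedure concrete, below is a minimal Python sketch of the residual-fitting loop the abstract describes. The details are illustrative assumptions rather than the authors' exact formulation: a single sigmoid unit trained by gradient descent on the current residual, and an F-test comparing residual variances standing in for the paper's statistical test on the new unit's estimated parameters. Names such as fit_unit and incremental_fit are hypothetical.

```python
# Sketch of incremental learning with a statistical stopping criterion.
# Assumptions (not from the paper): one sigmoid unit per step, gradient
# descent on mean squared error, F-test on residual variances as the
# stopping test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def fit_unit(x, residual, epochs=2000, lr=0.1):
    """Train one unit pred = w_out * sigmoid(w*x + b) on the residual."""
    w, b, w_out = rng.normal(size=3)
    for _ in range(epochs):
        h = 1.0 / (1.0 + np.exp(-(w * x + b)))   # hidden activation
        err = w_out * h - residual               # prediction error
        g_h = err * w_out * h * (1 - h)          # backprop through sigmoid
        w -= lr * np.mean(g_h * x)
        b -= lr * np.mean(g_h)
        w_out -= lr * np.mean(err * h)
    return w, b, w_out

def incremental_fit(x, y, max_units=20, alpha=0.05):
    """Add units one at a time until the residual behaves like noise."""
    units = []
    residual = y.copy()
    for _ in range(max_units):
        w, b, w_out = fit_unit(x, residual)
        pred = w_out / (1.0 + np.exp(-(w * x + b)))
        new_residual = residual - pred
        # Stopping test: did the new unit significantly reduce the
        # residual variance?  If not, treat the residual as noise.
        f = np.var(residual) / np.var(new_residual)
        dof = len(x) - 1
        p = 1.0 - stats.f.cdf(f, dof, dof)
        if p > alpha:
            break
        units.append((w, b, w_out))
        residual = new_residual
    return units

# Toy problem: a noisy sine wave.
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)
units = incremental_fit(x, y)
print(f"network built with {len(units)} hidden units")
```

The loop stops as soon as a new unit no longer reduces the residual significantly, which is the mechanism the abstract credits for producing a network of well-suited size instead of a fixed, possibly oversized, architecture.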

Cite

Chentouf, R., & Jutten, C. (1995). Incremental learning with a stopping criterion: experimental results. In Lecture Notes in Computer Science (Vol. 930, pp. 519–526). Springer-Verlag. https://doi.org/10.1007/3-540-59497-3_218
