Elimination of Overtraining by a Mutual Information Network

  • Deco G
  • Finnoff W
  • Zimmermann H

Abstract

The presented learning paradigm uses supervised back-propagation and introduces an extra penalty term in the cost function which controls the complexity of the internal representation of the hidden neurons in an unsupervised form. This term is the mutual information, which penalizes the learning of noise. The learning algorithm was applied to predicting German interest rates using real-world historical data, and excellent results were obtained. The effect of overtraining was eliminated, allowing an implementation that finds the solution automatically, without interactive strategies such as stopped training or pruning.
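The abstract does not give the exact form of the penalty, but the idea of augmenting the back-propagation cost with a mutual-information term on the hidden layer can be sketched as follows. This is a minimal illustration assuming a Gaussian-channel approximation of the mutual information (0.5·log(1 + var/σ²) per hidden unit); the function names, the noise variance `noise_var`, and the weighting `lam` are illustrative choices, not the paper's notation.

```python
import numpy as np

def mi_penalty(hidden, noise_var=1.0):
    """Approximate mutual information carried by the hidden layer.

    Assumes a Gaussian channel per hidden unit, giving
    0.5 * log(1 + var(h_j) / noise_var) summed over units.
    (Illustrative form; the paper's exact penalty may differ.)
    hidden: array of shape (n_samples, n_hidden) of activations.
    """
    var = hidden.var(axis=0)
    return 0.5 * np.sum(np.log1p(var / noise_var))

def regularized_cost(y_pred, y_true, hidden, lam=0.01):
    """Supervised MSE plus the unsupervised mutual-information penalty.

    A hidden representation with low variance carries little information
    and is cheap; fitting noise requires high-information representations,
    which the penalty discourages.
    """
    mse = np.mean((y_pred - y_true) ** 2)
    return mse + lam * mi_penalty(hidden)
```

In training, the gradient of this combined cost would be propagated through both the output error and the hidden activations, so the network trades prediction accuracy against the information content of its internal representation.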

Citation (APA)

Deco, G., Finnoff, W., & Zimmermann, H. G. (1993). Elimination of Overtraining by a Mutual Information Network. In ICANN ’93 (pp. 744–749). Springer London. https://doi.org/10.1007/978-1-4471-2063-6_208
