Structural learning of neural network for continuous valued output: Effect of penalty term to hidden units

Abstract

Multilayer feed-forward networks trained with backpropagation are widely used for function approximation, but the learned networks rarely reveal the input-output relationship explicitly. Structural learning methods have been proposed to optimize the network topology and to make its internal behaviour interpretable. Effective structural learning approaches for optimizing and interpreting neural networks, such as structural learning with forgetting (SLF) and fast integration learning (FIL), have proved useful for problems with binary outputs. In this work, a new structural learning method based on modifications of SLF and FIL is proposed for problems with continuous valued outputs. The effectiveness of the proposed method is demonstrated by simulation experiments with continuous valued functions. © Springer-Verlag Berlin Heidelberg 2004.
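The core idea behind structural learning with forgetting is to add a penalty term to ordinary backpropagation that continually decays weights toward zero, so that connections not needed for the task vanish and the surviving skeleton network becomes interpretable. The sketch below is a minimal illustration of this idea (not the authors' exact method): a one-hidden-layer network is trained on a continuous-valued target with an L1 "forgetting" term whose gradient is a constant-magnitude decay eps * sign(w). The target function, network size, learning rate, and decay strength are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative continuous-valued target to approximate (assumed, not from the paper)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(np.pi * X)

# One-hidden-layer network: tanh hidden units, linear output
n_hidden = 10
W1 = rng.normal(0.0, 0.5, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
b2 = np.zeros(1)

lr = 0.05    # learning rate (assumed)
eps = 1e-4   # forgetting (L1 penalty) strength (assumed)

for epoch in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y

    # Backward pass for mean squared error
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)

    # SLF-style update: gradient step plus constant L1 decay,
    # which pushes unimportant weights toward exactly zero
    W2 -= lr * gW2 + eps * np.sign(W2)
    b2 -= lr * gb2
    W1 -= lr * gW1 + eps * np.sign(W1)
    b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
n_small = int(np.sum(np.abs(np.concatenate([W1.ravel(), W2.ravel()])) < 1e-2))
print(f"final MSE: {mse:.4f}, near-zero weights: {n_small}")
```

After training, weights that contribute little to the fit cluster near zero and can be pruned, which is what exposes the network's structure; the paper's contribution concerns how such penalty terms behave when the outputs are continuous rather than binary.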

APA

Chakraborty, B., & Manabe, Y. (2004). Structural learning of neural network for continuous valued output: Effect of penalty term to hidden units. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3316, 599–605. https://doi.org/10.1007/978-3-540-30499-9_92
