Error weighting in Artificial Neural Networks learning interpreted as a metaplasticity model

Abstract

The design algorithms and learning methods of many Artificial Neural Networks involve the minimization of an error objective function. During learning, weight values are updated following a strategy that tends to minimize the final mean error in the network's performance. Weight values are classically seen as a representation of the synaptic weights in biological neurons, and their ability to change can be interpreted as artificial plasticity inspired by this biological property of neurons. Along these lines, metaplasticity is interpreted in this paper as the ability to change the efficiency of artificial plasticity, giving more relevance to the weight updates of less frequent activations and less relevance to those of frequent ones. Modeling this interpretation in the training phase, the hypothesis of improved training is tested on a Multilayer Perceptron trained with Backpropagation. The results show a much more efficient training that maintains the Artificial Neural Network's performance. © Springer-Verlag Berlin Heidelberg 2007.
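To make the idea concrete, the sketch below scales the standard Backpropagation update of a single-hidden-layer perceptron by a per-pattern factor that grows for infrequent (low-density) input patterns and shrinks for frequent ones. The Gaussian density estimate and the constants A and B are illustrative assumptions for this sketch, not the authors' exact weighting function.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def metaplasticity_weight(x, A=8.0, B=0.5):
    # Hypothetical metaplasticity factor: an unnormalized Gaussian density
    # estimate of the input pattern; rare patterns (density near 0) get a
    # factor near 1/B, frequent ones (density near 1) a factor near 1/(A+B).
    density = np.exp(-np.sum(x**2) / 2.0)
    return 1.0 / (B + A * density)

def train_step(x, t, W1, W2, eta=0.5):
    # One stochastic-gradient Backpropagation step, with both layer updates
    # scaled by the metaplasticity factor of the current input pattern.
    h = sigmoid(W1 @ x)           # hidden-layer activations
    y = sigmoid(W2 @ h)           # output-layer activations
    m = metaplasticity_weight(x)  # per-pattern update weighting
    delta_out = (y - t) * y * (1 - y)          # output deltas (squared error)
    delta_hid = (W2.T @ delta_out) * h * (1 - h)
    W2 -= eta * m * np.outer(delta_out, h)
    W1 -= eta * m * np.outer(delta_hid, x)
    return W1, W2

# Example usage with random data (4 inputs, 5 hidden units, 1 output):
W1 = rng.normal(scale=0.5, size=(5, 4))
W2 = rng.normal(scale=0.5, size=(1, 5))
x, t = rng.normal(size=4), np.array([1.0])
W1, W2 = train_step(x, t, W1, W2)

Because the factor multiplies the learning rate rather than changing the gradient direction, the update rule reduces to plain Backpropagation when the factor is constant.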

Citation (APA)

Andina, D., Jevtić, A., Marcano, A., & Barrón Adame, J. M. (2007). Error weighting in Artificial Neural Networks learning interpreted as a metaplasticity model. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4527 LNCS, pp. 244–252). Springer-Verlag. https://doi.org/10.1007/978-3-540-73053-8_24
