On appropriate refractoriness and weight increment in incremental learning


Abstract

Neural networks can learn more patterns with incremental learning than with correlative learning. Incremental learning is a method of composing an associative memory using a chaotic neural network. The capacity of the network is found to increase with its size, that is, the number of neurons in the network, and to be larger than that obtained with correlative learning. In earlier work, the capacity grew faster than in direct proportion to the network size for suitable pairs of the refractory parameter and the learning parameter. In this paper, the refractory parameter and the learning parameter are investigated through computer simulations in which these parameters are varied. The simulations show that the appropriate parameters lie near the origin, with a certain relation between them. © 2013 Springer-Verlag Berlin Heidelberg.
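To make the two parameters concrete, the following is a minimal sketch of incremental learning on a chaotic-neuron associative memory. It is an assumption-laden illustration, not the authors' exact method: the Aihara-style refractory trace, the disagreement-driven weight-increment rule, and the names `alpha` (refractory parameter), `delta_w` (learning parameter), `k_r`, `learn_pattern`, and `recall` are all stand-ins chosen for this example.

```python
import numpy as np

# Minimal sketch of incremental learning with chaotic-style neurons.
# ASSUMPTIONS: the refractory trace and the weight-increment rule below
# are simplified stand-ins for the paper's method; `alpha` plays the
# role of the refractory parameter and `delta_w` the learning
# (weight-increment) parameter.

rng = np.random.default_rng(0)

N = 50          # network size (number of neurons)
alpha = 0.5     # refractory parameter (assumed scaling of refractoriness)
delta_w = 0.05  # learning parameter (assumed weight increment)
k_r = 0.9       # decay rate of the refractory trace (assumed)
steps = 100     # presentation steps per pattern


def learn_pattern(w, pattern):
    """Present one bipolar pattern; increment weights of neurons whose
    recurrent input disagrees in sign with the external input."""
    x = pattern.astype(float).copy()       # outputs start at the pattern
    refractory = np.zeros(N)               # decaying trace of past firing
    for _ in range(steps):
        feedback = w @ x                   # recurrent (mutual) input
        refractory = k_r * refractory + x  # accumulate refractoriness
        internal = feedback - alpha * refractory + pattern
        x = np.sign(internal)
        x[x == 0] = 1.0
        # Hebbian-style increment for disagreeing neurons (assumed rule).
        disagree = np.sign(feedback) != pattern
        for i in np.flatnonzero(disagree):
            w[i, :] += delta_w * pattern[i] * pattern
            w[i, i] = 0.0                  # no self-connection
    return w


def recall(w, cue, iters=20):
    """Synchronous recall without external input or refractoriness."""
    x = cue.astype(float).copy()
    for _ in range(iters):
        x = np.sign(w @ x)
        x[x == 0] = 1.0
    return x


patterns = rng.choice([-1.0, 1.0], size=(10, N))
w = np.zeros((N, N))
for p in patterns:
    w = learn_pattern(w, p)

stored = sum(np.array_equal(recall(w, p), p) for p in patterns)
print(f"{stored}/{len(patterns)} patterns recalled as fixed points")
```

Sweeping `alpha` and `delta_w` over a grid and counting recalled patterns, as in the paper's simulations, would trace out which (refractory, learning) pairs perform well; the abstract reports that the good pairs cluster near the origin with a relation between the two.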

Citation (APA)

Deguchi, T., Fukuta, J., & Ishii, N. (2013). On appropriate refractoriness and weight increment in incremental learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7824 LNCS, pp. 1–9). Springer Verlag. https://doi.org/10.1007/978-3-642-37213-1_1
