Inward relearning: A step towards long-term memory

Abstract

Artificial neural networks are often used as models of biological memory because they share properties with it such as generalisation, distributed representation, robustness, and fault tolerance. However, they operate on a short-term scale and can therefore only serve as appropriate models of short-term memory. This limitation is known as catastrophic interference: when a new set of data is learned, the network completely forgets the previously trained sets. To mitigate this restriction, we have developed an algorithm that enables some types of neural network to behave better over the long term. It requires local networks in which the representation takes the form of prototypes (as an example, we use an RBF network). These prototypes model the previously learned input subspaces. During the presentation of a new input subspace, they can be inwardly manipulated so as to enable a "relearning" of part of the internal model. To demonstrate the long-term capabilities of our heuristic, we compare the results of simulations with those obtained by a multi-layer network on a typical psychophysical experiment.
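
The abstract gives only the outline of the mechanism, so the following Python sketch is a loose illustration of the general idea, not the authors' algorithm: an RBF network whose Gaussian prototypes are replayed as pseudo-patterns, labelled with the network's own current outputs, whenever a new input subspace is trained, so that part of the previously learned internal model is "relearned" alongside the new data. All names here (RBFNet, pseudo_patterns, the regularised least-squares weight fit) are assumptions made for illustration.

# Hypothetical sketch (not the paper's exact algorithm): an RBF network
# whose prototypes are replayed as pseudo-patterns during later training.
import numpy as np

class RBFNet:
    def __init__(self, centers, width=0.3):
        self.centers = np.asarray(centers, dtype=float)   # the prototypes
        self.width = width
        self.weights = np.zeros(len(self.centers))

    def _phi(self, X):
        # Gaussian activation of every prototype for every input row
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y, reg=1e-6):
        # Regularised least-squares fit of the output weights
        P = self._phi(np.asarray(X, dtype=float))
        A = P.T @ P + reg * np.eye(P.shape[1])
        self.weights = np.linalg.solve(A, P.T @ np.asarray(y, dtype=float))

    def predict(self, X):
        return self._phi(np.asarray(X, dtype=float)) @ self.weights

    def pseudo_patterns(self):
        # "Inward" use of the stored prototypes: replay the centers,
        # labelled with the network's own current outputs.
        return self.centers.copy(), self.predict(self.centers)

# Usage: learn subspace A, then subspace B together with the replayed
# prototypes, so the mapping learned on A is partly retained.
rng = np.random.default_rng(0)
XA = rng.uniform(-1.0, 0.0, size=(20, 1)); yA = np.sin(3.0 * XA[:, 0])
XB = rng.uniform(0.0, 1.0, size=(20, 1));  yB = np.cos(3.0 * XB[:, 0])

net = RBFNet(centers=np.linspace(-1.0, 1.0, 12)[:, None])
net.fit(XA, yA)                          # first input subspace
Xp, yp = net.pseudo_patterns()           # internal model as pseudo-data
net.fit(np.vstack([XB, Xp]), np.hstack([yB, yp]))   # relearn with B

Fitting on the union of the new samples and the replayed prototypes is what stands in here for the relearning step; the actual inward manipulation of the prototypes described in the paper may well differ.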

Citation (APA)

Wacquant, S., & Joublin, F. (1996). Inward relearning: A step towards long-term memory. In Lecture Notes in Computer Science (Vol. 1112, pp. 887–892). Springer-Verlag. https://doi.org/10.1007/3-540-61510-5_149
