In real learning paradigms such as Pavlovian conditioning, several modes of learning are combined, including generalization from cues and integration of specific cases in their context. Associative memories have proven to be interesting neuronal models for quickly learning specific cases, but they are rarely used in realistic applications because their limited storage capacity leads to interference when too many examples are stored. Inspired by biological considerations, we propose a modular model of associative memory that includes mechanisms to properly handle multimodal inputs and to detect and manage interference. This paper reports experiments demonstrating the good behavior of the model across a wide range of simulations and discusses its impact on both machine learning and biological modeling.
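The interference the abstract refers to can be illustrated with a classical Hopfield-style associative memory (a standard textbook model, not the authors' modular architecture): with Hebbian storage, random patterns are reliably recalled while the load stays well below the theoretical capacity of roughly 0.14 patterns per neuron, and recall collapses once that limit is exceeded. The sketch below, with assumed network size and pattern counts, demonstrates this.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for a Hopfield associative memory (zero diagonal)."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, probe, steps=10):
    """Synchronous sign updates until a fixed point (or a step limit)."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.sign(w @ s)
        s_new[s_new == 0] = 1  # break ties toward +1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

rng = np.random.default_rng(0)
n = 200  # neurons (illustrative choice)

def fraction_recalled(n_patterns):
    """Store random +/-1 patterns, then check how many are exact fixed points."""
    pats = rng.choice([-1, 1], size=(n_patterns, n)).astype(float)
    w = train_hopfield(pats)
    ok = sum(np.array_equal(recall(w, p), p) for p in pats)
    return ok / n_patterns

few = fraction_recalled(5)    # far below capacity (~0.14 * 200 = 28 patterns)
many = fraction_recalled(60)  # above capacity: crosstalk causes interference
print(few, many)
```

Below capacity, `few` is at or near 1.0; above it, `many` drops sharply, which is exactly the failure mode that motivates interference-management mechanisms like the one proposed in the paper.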
Kassab, R., & Alexandre, F. (2017). A modular network architecture resolving memory interference through inhibition. In Studies in Computational Intelligence (Vol. 669, pp. 407–422). Springer Verlag. https://doi.org/10.1007/978-3-319-48506-5_21