Learning of lateral connections for representational invariant recognition

Abstract

We study an artificial neural network that learns the invariance properties of objects from data. We start with a bag-of-features encoding of a specific object and repeatedly show the object in different transformations. The network then learns, unsupervised and from the data, what the possible transformations are and which feature arrangements are typical for the object shown. The information about transformations and feature arrangements is represented by a lateral network of excitatory connections among units that control the information exchange between an input layer and a downstream neural layer. We build on earlier work in this direction that kept a close relation to novel anatomical and physiological data on the cortical architecture and on its information processing and learning. At the same time we show, based on new synaptic plasticity rules, that learning results in a strong increase of object finding rates in both artificial and more realistic experiments. © 2010 Springer-Verlag Berlin Heidelberg.
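The abstract does not give the model equations, so the following is only a minimal sketch of the idea it describes, assuming a fixed random feed-forward mapping `W_ff` from the bag-of-features input to a set of control units, a Hebbian-style co-activation rule for the lateral excitatory weights `W_lat` among those units, and cyclic shifts as stand-in transformations. None of these names, sizes, or choices come from the paper itself; its actual plasticity rule may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not taken from the paper):
n_features = 32   # length of the bag-of-features vector
n_control = 16    # control units gating the input -> downstream exchange
eta = 0.05        # learning rate

W_ff = rng.random((n_control, n_features))   # fixed random feed-forward drive (assumed)
W_lat = np.zeros((n_control, n_control))     # lateral excitatory connections, learned

def present(pattern, W_lat):
    """One presentation of the object: activate the control units and apply a
    Hebbian-style co-activation update to the lateral weights (in place)."""
    a = np.maximum(W_ff @ pattern, 0.0)   # rectified control-unit activity
    a /= a.sum() + 1e-8                   # simple normalization
    W_lat += eta * np.outer(a, a)         # strengthen links between co-active units
    np.fill_diagonal(W_lat, 0.0)          # no self-connections
    np.clip(W_lat, 0.0, 1.0, out=W_lat)   # keep connections excitatory and bounded

# Repeatedly show the same object under random transformations of its feature
# arrangement; here a cyclic shift stands in for a transformation.
base = (rng.random(n_features) < 0.25).astype(float)   # sparse bag-of-features code
for _ in range(200):
    present(np.roll(base, rng.integers(n_features)), W_lat)

print("strongest learned lateral connections:",
      np.round(np.sort(W_lat.ravel())[-5:], 3))
```

After many presentations, pairs of control units that are reliably co-activated across the shown transformations end up with strong lateral connections, which is the kind of structure the abstract says encodes the object's transformations and typical feature arrangements.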

Cite

APA

Keck, C., & Lücke, J. (2010). Learning of lateral connections for representational invariant recognition. In Lecture Notes in Computer Science (Vol. 6354, pp. 21–30). Springer. https://doi.org/10.1007/978-3-642-15825-4_3
