On efficient sparse spike coding schemes for learning natural scenes in the primary visual cortex

We describe the theoretical formulation of a learning algorithm in a model of the primary visual cortex (V1) and assess its efficiency by comparing it to the SparseNet algorithm [1]. Like SparseNet, it is based on a model of signal synthesis as a Linear Generative Model, but it differs in its efficiency criterion for the representation. This criterion follows Occam's razor: for a similar reconstruction quality, the shortest representation should be preferred. The corresponding inverse problem is NP-complete, and we propose here a greedy solution grounded in the architecture and nature of neural computations [2]: supra-threshold neural activity progressively removes redundancies in the representation through correlation-based inhibition, providing a dynamical implementation close to Hebb's concept of neural assemblies [3].

We present simulation results for this network on small natural images (code available at http://www.incm.cnrs-mrs.fr/LaurentPerrinet/SparseHebbianLearning) and compare it to the SparseNet solution. Extending it to realistic images and to the NEST simulator (http://www.nest-initiative.org/), we show that this learning algorithm, based on the properties of neural computations, produces adaptive and efficient representations in V1.

References
1. Olshausen B, Field DJ: Sparse coding with an overcomplete basis set: a strategy employed by V1? Vision Res 1997, 37:3311-3325.
2. Perrinet L: Feature detection using spikes: the greedy approach. J Physiol Paris 2004, 98(4-6):530-539.
3. Hebb DO: The Organization of Behavior. Wiley, New York; 1949.
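Illustrative sketch. The following is a minimal Python/NumPy sketch of the greedy scheme described above, assuming a Matching Pursuit-style selection step (following the greedy approach of [2]) and a gradient-style Hebbian update of the dictionary. All function and parameter names (matching_pursuit, hebbian_update, n_dictionary, eta, ...) are illustrative assumptions, not taken from the linked SparseHebbianLearning code, and random patches stand in for natural image data.

import numpy as np

def matching_pursuit(image, phi, n_coefficients=10):
    """Greedy sparse coding: at each step, pick the dictionary element
    most correlated with the residual, record its coefficient, and
    subtract its contribution (correlation-based inhibition)."""
    residual = image.copy()
    coeffs = np.zeros(phi.shape[1])
    for _ in range(n_coefficients):
        correlations = phi.T @ residual          # match filters to residual
        best = np.argmax(np.abs(correlations))   # winner-take-all selection
        coeffs[best] += correlations[best]
        residual -= correlations[best] * phi[:, best]  # remove redundancy
    return coeffs, residual

def hebbian_update(phi, coeffs, residual, eta=0.01):
    """Hebbian learning: each filter moves toward the part of the signal
    it helped explain, in proportion to its activity."""
    phi += eta * np.outer(residual, coeffs)
    phi /= np.linalg.norm(phi, axis=0, keepdims=True)  # keep filters unit-norm
    return phi

# Toy run on random "patches" standing in for natural image data.
rng = np.random.default_rng(0)
n_pixels, n_dictionary = 64, 128              # 8x8 patches, 2x overcomplete
phi = rng.standard_normal((n_pixels, n_dictionary))
phi /= np.linalg.norm(phi, axis=0, keepdims=True)
for _ in range(100):
    patch = rng.standard_normal(n_pixels)
    coeffs, residual = matching_pursuit(patch, phi)
    phi = hebbian_update(phi, coeffs, residual)

In this sketch, the subtraction of each selected filter's contribution plays the role of the correlation-based inhibition described above: once a feature is accounted for, correlated filters no longer respond to it, so the representation stays sparse.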