We investigate the storage and short-term memory capacities of recurrent neural networks of spiking neurons. We demonstrate that such networks can process many superimposed input streams online, even though the stored information is spread diffusely throughout the network, and that simple output structures are powerful enough to extract this diffuse information. The dimensional blow-up, which is crucial in kernel methods, is achieved efficiently by the dynamics of the network itself. © Springer-Verlag Berlin Heidelberg 2003.
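The idea summarized above can be sketched in a toy simulation. This is not the authors' model: it substitutes a rate-based random recurrent network (an echo-state-style surrogate, not spiking neurons) for illustration. All parameter values (network size, sparsity, spectral radius, delay) are assumptions. A fixed, sparsely connected recurrent network superimposes several input streams in one high-dimensional state, and a simple linear readout per stream recovers a recent input value by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, n_streams, delay = 200, 2000, 3, 1  # hypothetical sizes

# Sparse random recurrent weights, rescaled for stable dynamics.
W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

W_in = rng.normal(0, 1, (N, n_streams))  # input weights
u = rng.uniform(-1, 1, (T, n_streams))   # independent input streams

# Run the network; all streams drive the same neurons simultaneously,
# so the information about each stream is spread across the state.
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# One linear readout per stream, trained to report that stream's value
# `delay` steps in the past (a short-term memory task).
X, Y = states[delay:], u[:-delay]
W_out = np.linalg.lstsq(X, Y, rcond=None)[0]
err = np.mean((X @ W_out - Y) ** 2)
print(f"mean squared readout error: {err:.4f}")
```

The nonlinear recurrent dynamics play the role of the kernel expansion: the readout itself stays linear, yet it can separate the superimposed streams because the network has projected them into a high-dimensional state space.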
Mayor, J., & Gerstner, W. (2003). Online processing of multiple inputs in a sparsely-connected recurrent neural network. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2714, 839–845. https://doi.org/10.1007/3-540-44989-2_100