Temporal synchrony of activation spikes has been proposed as the representational code by which the brain segments perceptual patterns into multiple visual objects or multiple auditory sources. In this chapter we look at the implications of this neuroscientific proposal for learning and computation in artificial neural networks. Previous work has defined an artificial neural network model which uses temporal synchrony to represent and learn about multiple entities (Simple Synchrony Networks). These networks can store arbitrary amounts of information in their internal state by segmenting their representation of state into arbitrarily many entities. They can also generalize what they learn to larger internal states by learning generalizations about individual entities. These claims are empirically demonstrated through results on training a Simple Synchrony Network to do syntactic parsing of real natural language sentences.
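The core idea of synchrony-based binding can be illustrated with a minimal sketch, not taken from the chapter itself: features whose units fire in the same phase slot of an oscillatory cycle are bound to the same entity, so one shared pool of feature units can represent several entities at once. The function name and data below are hypothetical, for illustration only.

```python
# Illustrative sketch (hypothetical, not the chapter's model): temporal-synchrony
# binding. Features firing in the same phase slot are bound to one entity, so
# the number of representable entities equals the number of phase slots.

def bind_by_phase(feature_phases):
    """Group (feature, phase) pairs by phase slot; each group is one entity."""
    entities = {}
    for feature, phase in feature_phases:
        entities.setdefault(phase, []).append(feature)
    return entities

# Two entities represented within a single shared set of feature units:
observations = [("red", 0), ("circle", 0), ("blue", 1), ("square", 1)]
print(bind_by_phase(observations))
# {0: ['red', 'circle'], 1: ['blue', 'square']}
```

Because segmentation is carried by the phase dimension rather than by dedicated units per entity, the same learned feature mappings apply to every phase slot, which is the intuition behind the generalization claim above.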
Henderson, J. (2001). Segmenting State into Entities and Its Implication for Learning (pp. 227–236). https://doi.org/10.1007/3-540-44597-8_17