Slow feature analysis with spiking neurons and its application to audio stimuli

Abstract

Extracting invariant features in an unsupervised manner is crucial to perform complex computations such as object recognition, analyzing music or understanding speech. While various algorithms have been proposed to perform such a task, Slow Feature Analysis (SFA) uses time as a means of detecting those invariants, extracting the slowly time-varying components in the input signals. In this work, we address the question of how such an algorithm can be implemented by neurons, and apply it in the context of audio stimuli. We propose a projected-gradient implementation of SFA that can be adapted to a Hebbian-like learning rule dealing with biologically plausible neuron models. Furthermore, we show that a Spike-Timing Dependent Plasticity (STDP) learning rule, shaped as a smoothed second derivative, implements SFA for spiking neurons. The theory is supported by numerical simulations, and to illustrate a simple use of SFA, we have applied it to auditory signals. We show that a single SFA neuron can learn to extract the tempo in sound recordings.
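The paper's contribution is a spiking, STDP-based implementation; as background, the classical linear SFA problem the abstract refers to (whiten the input, then find the projection whose output varies most slowly in time) can be sketched in NumPy. This is a minimal illustration, not the authors' method; all function and variable names are our own:

```python
import numpy as np

def linear_sfa(x, n_features=1):
    """Linear Slow Feature Analysis.

    x: array of shape (n_samples, n_dims).
    Returns projection vectors whose outputs have unit variance
    and minimal temporal variation (slowest first).
    """
    # Center and whiten so extracted features have zero mean, unit variance.
    x = x - x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    whitening = eigvecs / np.sqrt(eigvals)
    z = x @ whitening
    # Slowness: minimize the variance of the temporal derivative.
    # The slowest features are eigenvectors of cov(dz/dt) with the
    # smallest eigenvalues (eigh returns them in ascending order).
    dz = np.diff(z, axis=0)
    dcov = np.cov(dz, rowvar=False)
    _, dvecs = np.linalg.eigh(dcov)
    return whitening @ dvecs[:, :n_features]

# Toy example: a slow sine mixed with a fast one; the slowest SFA
# feature recovers the slow component from the mixture.
t = np.linspace(0, 2 * np.pi, 1000)
slow, fast = np.sin(t), np.sin(20 * t)
mix = np.column_stack([slow + fast, slow - fast])
w = linear_sfa(mix, n_features=1)
y = (mix - mix.mean(axis=0)) @ w
```

The spiking version in the paper replaces this batch eigendecomposition with an online, Hebbian-like update, but the objective (slowness under unit-variance constraints) is the same.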

APA

Bellec, G., Galtier, M., Brette, R., & Yger, P. (2016). Slow feature analysis with spiking neurons and its application to audio stimuli. Journal of Computational Neuroscience, 40(3), 317–329. https://doi.org/10.1007/s10827-016-0599-3
