Recent studies of biological auditory processing have revealed that sophisticated spectrotemporal analyses are performed by central auditory systems of various animals. The analysis is typically well matched with the statistics of relevant natural sounds, suggesting that it produces an optimal representation of the animal's acoustic biotope. We address this topic using simulated neurons that learn an optimal representation of a speech corpus. As input, the neurons receive a spectrographic representation of sound produced by a peripheral auditory model. The output representation is deemed optimal when the responses of the neurons are maximally sparse. Following optimization, the simulated neurons are similar to real neurons in many respects. Most notably, a given neuron only analyzes the input over a localized region of time and frequency. In addition, multiple subregions either excite or inhibit the neuron, together producing selectivity to spectral and temporal modulation patterns. This suggests that the brain's solution is particularly well suited for coding natural sound; therefore, it may prove useful in the design of new computational methods for processing speech.
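The optimization the abstract describes — learning a set of model neurons whose responses to spectrographic input are maximally sparse — can be illustrated with a minimal sparse-coding sketch. This is not the authors' exact algorithm: the synthetic patches, ISTA inference, gradient dictionary update, and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patches(n=200, n_freq=8, n_time=8):
    """Synthetic spectrogram-like patches: localized time-frequency bumps."""
    patches = np.zeros((n, n_freq * n_time))
    ff, tt = np.meshgrid(np.arange(n_freq), np.arange(n_time), indexing="ij")
    for i in range(n):
        img = np.zeros((n_freq, n_time))
        for _ in range(2):  # a couple of localized energy peaks per patch
            f0, t0 = rng.integers(n_freq), rng.integers(n_time)
            img += np.exp(-((ff - f0) ** 2 + (tt - t0) ** 2) / 2.0)
        patches[i] = img.ravel()
    patches -= patches.mean(axis=1, keepdims=True)  # remove DC component
    return patches

def ista(D, x, lam=0.1, n_iter=50):
    """Sparse responses a minimizing ||x - D a||^2 / 2 + lam * ||a||_1."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - D.T @ (D @ a - x) / L            # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

def learn_dictionary(X, n_atoms=16, n_epochs=5, lr=0.1, lam=0.1):
    """Alternate sparse inference and gradient updates of the 'neurons' D."""
    D = rng.standard_normal((X.shape[1], n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(n_epochs):
        for x in X:
            a = ista(D, x, lam=lam)
            D += lr * np.outer(x - D @ a, a)     # reduce reconstruction error
            D /= np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms
    return D

X = make_patches()
D = learn_dictionary(X)
codes = np.array([ista(D, x) for x in X])
sparsity = np.mean(np.isclose(codes, 0.0))  # fraction of silent responses
print(f"mean fraction of zero responses: {sparsity:.2f}")
```

After training, each column of `D` plays the role of one simulated neuron's spectrotemporal receptive field; the L1 penalty drives most responses to exactly zero, so each patch is explained by a few active units — the sparseness objective the abstract refers to.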
CITATION STYLE
Klein, D. J., König, P., & Körding, K. P. (2003). Sparse spectrotemporal coding of sounds. EURASIP Journal on Applied Signal Processing, 2003(7), 659–667. https://doi.org/10.1155/S1110865703303051