Predictive ensemble decoding of acoustical features explains context-dependent receptive fields

Abstract

A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have primarily been used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene.
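The sketch below is a minimal, illustrative contrast between the two views summarized in the abstract: a purely linear (feedforward) filter response versus an "explaining away" scheme in which feature detectors divisively compete to account for the same input. It is not the authors' model; the feature dictionary, the input frame, and the multiplicative update rule are all assumptions introduced here for illustration.

```python
import numpy as np

# Illustrative sketch only (not the published model): compare a linear
# feedforward response with divisive competition ("explaining away")
# among feature detectors interpreting the same spectral frame.

rng = np.random.default_rng(0)

n_freq, n_feat = 32, 4
features = np.abs(rng.normal(size=(n_feat, n_freq)))   # assumed spectral "filters"
features /= features.sum(axis=1, keepdims=True)

# A frame that genuinely contains feature 0 plus a little noise (assumption).
frame = 0.9 * features[0] + 0.05 * np.abs(rng.normal(size=n_freq))

# 1) Linear encoding view: each detector responds to its own filter match,
#    regardless of what the other detectors already explain.
linear_resp = features @ frame

# 2) Explaining-away view: each detector's response is divisively normalized
#    by the evidence already accounted for by competing interpretations.
resp = np.ones(n_feat) / n_feat
for _ in range(50):
    explained = resp @ features + 1e-9           # current reconstruction of the frame
    resp *= features @ (frame / explained)       # multiplicative / divisive update
    resp /= resp.sum()

print("linear responses     :", np.round(linear_resp, 3))
print("competitive responses:", np.round(resp, 3))
```

Under this toy update, the competitive responses concentrate on the feature that best explains the frame, while the linear responses remain spread across overlapping filters, which is the qualitative distinction the abstract draws.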

Citation (APA)
Yildiz, I. B., Mesgarani, N., & Deneve, S. (2016). Predictive ensemble decoding of acoustical features explains context-dependent receptive fields. Journal of Neuroscience, 36(49), 12338–12350. https://doi.org/10.1523/JNEUROSCI.4648-15.2016
