Understanding speech requires mapping fleeting and often ambiguous soundwaves to meaning. While humans are known to exploit their capacity to contextualize in order to facilitate this process, how internal knowledge is deployed online remains an open question. Here, we present a model that extracts multiple levels of information from continuous speech online. The model applies linguistic and nonlinguistic knowledge to speech processing by periodically generating top-down predictions and incorporating bottom-up incoming evidence in a nested temporal hierarchy. We show that a nonlinguistic context level provides semantic predictions informed by sensory inputs, which are crucial for disambiguating among multiple meanings of the same word. The explicit knowledge hierarchy of the model enables a more holistic account of the neurophysiological responses to speech than lexical predictions generated by a neural network language model (GPT-2). We also show that hierarchical predictions reduce peripheral processing by minimizing uncertainty and prediction error. With this proof-of-concept model, we demonstrate that the deployment of hierarchical predictions is a possible strategy for the brain to dynamically utilize structured knowledge and make sense of the speech input.
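To make the nested-temporal-hierarchy idea concrete, the sketch below shows one way such a model could be organized: a slow "context" level periodically predicts a faster "word" level, which in turn predicts incoming acoustic frames, and bottom-up prediction errors update each level on its own timescale. The level names, dimensions, update periods, and the simple gradient-style correction rule are illustrative assumptions for this sketch, not the equations of the published model.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and timescales (assumptions, not the paper's values):
# a slow "context" level, a faster "word" level, and acoustic frames.
DIMS = {"context": 4, "word": 8, "acoustic": 16}
PERIODS = {"context": 8, "word": 4, "acoustic": 1}  # update every N ticks

# Random linear generative mappings stand in for learned linguistic knowledge.
W_ctx_to_word = rng.normal(size=(DIMS["word"], DIMS["context"]))
W_word_to_acoustic = rng.normal(size=(DIMS["acoustic"], DIMS["word"]))

def run(acoustic_frames, lr=0.1):
    """Periodic top-down prediction with bottom-up error correction."""
    ctx = np.zeros(DIMS["context"])
    word = np.zeros(DIMS["word"])
    for t, frame in enumerate(acoustic_frames):
        # Top-down: each level predicts the level below it.
        pred_word = W_ctx_to_word @ ctx
        pred_acoustic = W_word_to_acoustic @ word
        # Bottom-up: prediction errors climb the hierarchy on nested timescales.
        err_acoustic = frame - pred_acoustic
        if t % PERIODS["acoustic"] == 0:
            word += lr * (W_word_to_acoustic.T @ err_acoustic)
        if t % PERIODS["word"] == 0:
            ctx += lr * (W_ctx_to_word.T @ (word - pred_word))
        yield float(np.sum(err_acoustic ** 2))  # residual peripheral error

frames = rng.normal(size=(32, DIMS["acoustic"]))
print(f"mean peripheral prediction error: {np.mean(list(run(frames))):.3f}")

The nested update periods are what make the hierarchy temporal: the context level integrates evidence over many acoustic frames, so it can carry the slowly varying semantic constraints needed to disambiguate among multiple meanings of a word.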
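For the comparison with lexical predictions from a language model, a standard quantity is per-word surprisal, -log p(word | left context). The snippet below computes it with the Hugging Face transformers implementation of GPT-2; this is a generic text-based recipe for obtaining such predictions, not the authors' exact pipeline for relating them to neurophysiological responses, and the example sentence is an arbitrary placeholder.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The bank raised the interest rate"  # placeholder sentence
ids = tok(text, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits

# Surprisal of each token given its left context: -log p(token | context).
logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
targets = ids[0, 1:]
surprisal = -logprobs[torch.arange(targets.numel()), targets]
for token, s in zip(tok.convert_ids_to_tokens(targets.tolist()), surprisal):
    print(f"{token:>12s}  {s.item():.2f} nats")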
Su, Y., MacGregor, L. J., Olasagasti, I., & Giraud, A. L. (2023). A deep hierarchy of predictions enables online meaning extraction in a computational model of human speech comprehension. PLoS Biology, 21(3), e3002046. https://doi.org/10.1371/journal.pbio.3002046