A Neural Network Model of Lexical-Semantic Competition During Spoken Word Recognition

Abstract

Visual world studies show that upon hearing a word in a target-absent visual context containing related and unrelated items, toddlers and adults briefly direct their gaze toward phonologically related items, before shifting toward semantically and visually related ones. We present a neural network model that processes dynamic unfolding phonological representations of words and maps them to static internal lexical, semantic, and visual representations. The model, trained on representations derived from real corpora, simulates this early phonological over semantic/visual preference. Our results support the hypothesis that the incremental unfolding of a spoken word is in itself sufficient to account for the transient preference for phonological competitors over both unrelated and semantically and visually related ones. Phonological representations mapped dynamically in a bottom-up fashion to semantic-visual representations capture the early phonological preference effects reported in visual world tasks. The semantic/visual preference typically observed later in such a task does not require top-down feedback from a semantic or visual system.
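The mapping the abstract describes — a dynamically unfolding phoneme sequence fed, one segment at a time, into a network that outputs a static semantic/visual vector at every step — can be sketched as a small Elman-style recurrent network. This is an illustrative assumption about the architecture, not the authors' implementation; all names, layer sizes, and the weight initialisation are hypothetical.

```python
import math
import random

random.seed(0)


def make_matrix(rows, cols):
    """Small random weight matrix (hypothetical initialisation)."""
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)]
            for _ in range(rows)]


def matvec(W, v):
    """Matrix-vector product over plain Python lists."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


class RecurrentMapper:
    """Sketch of a recurrent phonology-to-semantics mapper: a word's
    phonemes arrive one at a time; the hidden state accumulates the
    partial phonological input, and the output layer gives the
    network's current best guess at the (static) semantic/visual
    representation. Early in the word, partial overlap means
    phonological competitors share this guess; it settles toward the
    target as more phonemes arrive."""

    def __init__(self, n_phon, n_hidden, n_sem):
        self.W_in = make_matrix(n_hidden, n_phon)    # phoneme -> hidden
        self.W_rec = make_matrix(n_hidden, n_hidden)  # hidden -> hidden
        self.W_out = make_matrix(n_sem, n_hidden)     # hidden -> semantics
        self.n_hidden = n_hidden

    def forward(self, phoneme_seq):
        h = [0.0] * self.n_hidden
        outputs = []
        for phon in phoneme_seq:  # the word unfolds incrementally
            pre = [a + b for a, b in zip(matvec(self.W_in, phon),
                                         matvec(self.W_rec, h))]
            h = [sigmoid(x) for x in pre]
            # emit a semantic/visual vector at every time step
            outputs.append([sigmoid(x) for x in matvec(self.W_out, h)])
        return outputs


# Usage: three one-hot phonemes from a four-phoneme inventory,
# mapped to a three-unit semantic/visual output at each step.
net = RecurrentMapper(n_phon=4, n_hidden=5, n_sem=3)
steps = net.forward([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
```

The key design point relative to the abstract's claim: all information flows bottom-up (phonology to semantics), and the step-by-step outputs are what a visual world simulation would read out; no top-down feedback from the semantic/visual side is needed.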

Citation (APA)

Duta, M., & Plunkett, K. (2021). A Neural Network Model of Lexical-Semantic Competition During Spoken Word Recognition. Frontiers in Human Neuroscience, 15. https://doi.org/10.3389/fnhum.2021.700281
