How children learn to understand language meanings: a neural model of adult–child multimodal interactions in real-time


Abstract

This article describes a biological neural network model that can be used to explain how children learn to understand language meanings about the perceptual and affective events that they consciously experience. This kind of learning often occurs when a child interacts with an adult teacher to learn language meanings about events that they experience together. Multiple types of self-organizing brain processes are involved in learning language meanings, including processes that control conscious visual perception, joint attention, object learning and conscious recognition, cognitive working memory, cognitive planning, emotion, cognitive-emotional interactions, volition, and goal-oriented actions. The article shows how all of these brain processes interact to enable the learning of language meanings to occur. The article also contrasts these human capabilities with AI models such as ChatGPT. The current model is called the ChatSOME model, where SOME abbreviates Self-Organizing MEaning.

Citation (APA)

Grossberg, S. (2023). How children learn to understand language meanings: a neural model of adult–child multimodal interactions in real-time. Frontiers in Psychology, 14. https://doi.org/10.3389/fpsyg.2023.1216479
