Language acquisition: The emergence of words from multimodal input

Abstract

Young infants learn words by detecting patterns in the speech signal and by associating these patterns with stimuli provided by non-speech modalities (such as vision). In this paper, we discuss a computational model that is able to detect and build word-like representations on the basis of multimodal input data. Learning of words (and word-like entities) takes place within a communicative loop between a 'carer' and the 'learner'. Experiments carried out on three different European languages (Finnish, Swedish, and Dutch) show that a robust representation of a word can be learned from approximately 50 acoustic tokens (examples) of that word. The model is inspired by the memory structure that is assumed to be functional in human speech processing. © 2008 Springer-Verlag Berlin Heidelberg.
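The communicative loop described in the abstract pairs an utterance from the 'carer' with a concurrent non-speech stimulus, and the 'learner' accumulates evidence for word-referent associations across many such episodes. As an illustrative sketch only (not the authors' actual model, which operates on acoustic input), the idea of cross-situational association learning can be shown with a toy co-occurrence counter over symbolic tokens; all names and example data below are hypothetical:

```python
from collections import defaultdict

def learn_associations(episodes):
    """Accumulate word-referent co-occurrence counts across episodes.

    Each episode pairs the tokens of a 'carer' utterance with the set
    of referents visible at the same time (the non-speech modality).
    """
    counts = defaultdict(lambda: defaultdict(int))
    for words, referents in episodes:
        for w in words:
            for r in referents:
                counts[w][r] += 1
    return counts

def best_referent(counts, word):
    """Return the referent that most often co-occurred with `word`."""
    refs = counts[word]
    return max(refs, key=refs.get)

# Toy episodes: utterances paired with visual scenes (hypothetical data).
episodes = [
    (["look", "a", "ball"], {"BALL", "TABLE"}),
    (["the", "ball", "rolls"], {"BALL"}),
    (["a", "red", "car"], {"CAR", "TABLE"}),
    (["the", "car", "stops"], {"CAR"}),
]

counts = learn_associations(episodes)
print(best_referent(counts, "ball"))  # BALL
print(best_referent(counts, "car"))   # CAR
```

Ambiguity within a single episode (e.g. both BALL and TABLE visible) is resolved statistically over repeated exposures, which is why the abstract reports learning curves in terms of the number of tokens (roughly 50) needed per word.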

CITATION STYLE

APA

Ten Bosch, L., & Boves, L. (2008). Language acquisition: The emergence of words from multimodal input. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5246 LNAI, pp. 261–268). https://doi.org/10.1007/978-3-540-87391-4_34
