Acquiring language from speech by learning to remember and predict

Abstract

Classical accounts of child language learning invoke memory limits as a pressure to discover sparse, language-like representations of speech, while more recent proposals stress the importance of prediction for language learning. In this study, we propose a broad-coverage unsupervised neural network model to test memory and prediction as sources of signal by which children might acquire language directly from the perceptual stream. Our model embodies several likely properties of real-time human cognition: it is strictly incremental, it encodes speech into hierarchically organized labeled segments, it allows interactive top-down and bottom-up information flow, it attempts to model its own sequence of latent representations, and its objective function only recruits local signals that are plausibly supported by human working memory capacity. We show that much phonemic structure is learnable from unlabeled speech on the basis of these local signals. We further show that remembering the past and predicting the future both contribute to the linguistic content of acquired representations, and that these contributions are at least partially complementary.
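
The abstract describes the model only at a high level; below is a minimal sketch (in PyTorch, not the authors' implementation) of the two local signals it names: reconstructing recently heard frames ("remember") and anticipating upcoming frames ("predict") from a strictly incremental encoder. All class names, dimensions, and window sizes are illustrative assumptions, and the sketch omits the hierarchically organized labeled segments and the top-down/bottom-up interaction that the published model includes.

```python
# Hypothetical sketch of a "remember and predict" objective over speech frames.
# Not the authors' architecture; names and hyperparameters are assumptions.
import torch
import torch.nn as nn

class RememberPredictModel(nn.Module):
    def __init__(self, n_feats=13, hidden=64, window=5):
        super().__init__()
        self.window = window
        # Strictly incremental encoder: consumes one acoustic frame at a time.
        self.encoder = nn.GRU(n_feats, hidden, batch_first=True)
        # "Remember": decode the last `window` frames from the current state.
        self.reconstruct = nn.Linear(hidden, window * n_feats)
        # "Predict": guess the next `window` frames from the current state.
        self.predict = nn.Linear(hidden, window * n_feats)

    def forward(self, frames):
        # frames: (batch, time, n_feats), e.g. MFCC-like features
        states, _ = self.encoder(frames)
        return self.reconstruct(states), self.predict(states)

def local_loss(model, frames):
    """Sum of reconstruction and prediction MSE over short local windows,
    i.e. only signals a limited working memory could plausibly supply."""
    B, T, F = frames.shape
    w = model.window
    recon, pred = model(frames)
    loss = frames.new_zeros(())
    for t in range(w, T - w):
        past = frames[:, t - w + 1 : t + 1, :].reshape(B, -1)    # last w frames
        future = frames[:, t + 1 : t + 1 + w, :].reshape(B, -1)  # next w frames
        loss = loss + nn.functional.mse_loss(recon[:, t], past)
        loss = loss + nn.functional.mse_loss(pred[:, t], future)
    return loss / max(T - 2 * w, 1)

if __name__ == "__main__":
    # One gradient step on random stand-in "speech" features.
    model = RememberPredictModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    frames = torch.randn(8, 100, 13)
    opt.zero_grad()
    loss = local_loss(model, frames)
    loss.backward()
    opt.step()
    print(float(loss))
```

In this toy setup, dropping either loss term corresponds to the paper's comparison of memory-only versus prediction-only signals; the finding that the two are partially complementary suggests each term shapes the learned representations differently.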

Citation (APA)

Shain, C., & Elsner, M. (2020). Acquiring language from speech by learning to remember and predict. In CoNLL 2020 - 24th Conference on Computational Natural Language Learning, Proceedings of the Conference (pp. 195–214). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.conll-1.15

