How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech

Abstract

When acquiring syntax, children consistently choose hierarchical rules over competing non-hierarchical possibilities. Is this preference due to a learning bias for hierarchical structure, or due to more general biases that interact with hierarchical cues in children's linguistic input? We explore these possibilities by training LSTMs and Transformers (two types of neural networks without a hierarchical bias) on data similar in quantity and content to children's linguistic input: text from the CHILDES corpus. We then evaluate what these models have learned about English yes/no questions, a phenomenon for which hierarchical structure is crucial. We find that, though they perform well at capturing the surface statistics of child-directed speech (as measured by perplexity), both model types generalize in a way more consistent with an incorrect linear rule than the correct hierarchical rule. These results suggest that human-like generalization from text alone requires stronger biases than the general sequence-processing biases of standard neural network architectures.
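
The contrast the abstract describes is between two candidate rules for forming yes/no questions: a hierarchical rule that fronts the auxiliary of the main clause, and a linear rule that fronts the first auxiliary in the string. The sketch below shows one minimal way such a preference could be probed with a trained language model by comparing sentence log-probabilities. It is not the authors' evaluation code; it assumes a Hugging Face causal LM interface, and "gpt2" is only a placeholder for the paper's CHILDES-trained LSTMs and Transformers.

# Minimal sketch (not the authors' code): probe whether a trained language
# model prefers the hierarchical or the linear auxiliary-fronting rule.
# "gpt2" is a stand-in; the paper's models are trained from scratch on CHILDES.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Total log-probability of a sentence under the model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token;
    # multiply back by the number of predicted tokens to get the total.
    return -out.loss.item() * (ids.shape[1] - 1)

# Hierarchical rule: front the auxiliary of the main clause.
# Linear rule (incorrect): front the first auxiliary in the string.
hierarchical = "is the boy who is tall happy ?"
linear = "is the boy who tall is happy ?"

print("hierarchical:", sentence_logprob(hierarchical))
print("linear:      ", sentence_logprob(linear))
# A model that has acquired the hierarchical rule should assign the
# hierarchical question a higher log-probability than the linear one.

The test sentences above are illustrative only; the paper's evaluation uses models trained on child-directed speech and a larger controlled set of question pairs, but the forced-choice comparison has this general shape.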

Citation (APA)

Yedetore, A., Linzen, T., Frank, R., & McCoy, R. T. (2023). How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 9370–9393). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.acl-long.521
