Priorless Recurrent Networks Learn Curiously

Abstract

Recently, domain-general recurrent neural networks, without explicit linguistic inductive biases, have been shown to successfully reproduce a range of human language behaviours, such as accurately predicting number agreement between nouns and verbs. We show that such networks will also learn number agreement within unnatural sentence structures, i.e. structures that are not found in any natural language and which humans struggle to process. These results suggest that the models are learning from their input in a manner that is substantially different from human language acquisition, and we analyse how the learned knowledge is stored in the weights of the network. We find that while the model effectively distinguishes singular from plural in individual sentences, it lacks a unified concept of number agreement connecting these cases across the full range of inputs. Moreover, the weights handling natural and unnatural structures overlap substantially, in a way that underlines the non-human-like nature of the knowledge learned by the network.
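The abstract does not spell out the evaluation protocol, but number-agreement studies of this kind typically compare the probability a trained language model assigns to the number-matching versus the number-mismatching verb form after a sentence prefix. The following is a minimal sketch of that protocol, assuming a small PyTorch LSTM language model; the vocabulary, sentence, and hyperparameters are illustrative assumptions, not the authors' actual setup.

```python
# Hedged sketch (not the authors' code): scoring subject-verb number agreement
# with a toy LSTM language model by comparing the probability of the correct
# and incorrect verb forms after a prefix. Vocabulary and sizes are illustrative.
import torch
import torch.nn as nn

vocab = ["<pad>", "the", "dog", "dogs", "near", "barks", "bark"]
stoi = {w: i for i, w in enumerate(vocab)}

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) -> logits over the next word at each position
        emb = self.embed(tokens)
        hidden, _ = self.lstm(emb)
        return self.out(hidden)

def prefers_correct_verb(model, prefix, correct_verb, wrong_verb):
    """True if the model scores the number-matching verb above the mismatch."""
    ids = torch.tensor([[stoi[w] for w in prefix]])
    with torch.no_grad():
        logits = model(ids)[0, -1]  # next-word distribution after the prefix
    return bool(logits[stoi[correct_verb]] > logits[stoi[wrong_verb]])

model = LSTMLanguageModel(len(vocab))
# "The dogs near the dog ___": plural subject with a singular attractor noun.
prefix = ["the", "dogs", "near", "the", "dog"]
print(prefers_correct_verb(model, prefix, correct_verb="bark", wrong_verb="barks"))
```

The unnatural-structure conditions the paper describes would be probed the same way, simply by training and testing on sentence templates whose word order does not occur in any natural language.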

Citation (APA)
Mitchell, J., & Bowers, J. S. (2020). Priorless Recurrent Networks Learn Curiously. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 5147–5158). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.451
