Word frequency does not predict grammatical knowledge in language models


Abstract

Neural language models learn, to varying degrees of accuracy, the grammatical properties of natural languages. In this work, we investigate whether there are systematic sources of variation in the language models' accuracy. Focusing on subject-verb agreement and reflexive anaphora, we find that certain nouns are systematically understood better than others, an effect which is robust across grammatical tasks and different language models. Surprisingly, we find that across four orders of magnitude, corpus frequency is unrelated to a noun's performance on grammatical tasks. Finally, we find that a novel noun's grammatical properties can be few-shot learned from various types of training data. The results present a paradox: there should be less variation in grammatical performance than is actually observed.
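
The grammatical evaluations described above typically follow the minimal-pair paradigm: a model is credited with knowing a noun's agreement properties if it assigns higher probability to the grammatical member of a sentence pair than to its ungrammatical counterpart. The sketch below illustrates this paradigm; it is not the authors' code, and the model choice (GPT-2 via Hugging Face transformers) and the example sentence pair are illustrative assumptions.

```python
# Minimal-pair probe for subject-verb agreement (illustrative sketch).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Sum of token log-probabilities of a sentence under the model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Shift by one position so each logit predicts the following token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    return log_probs[torch.arange(targets.size(0)), targets].sum().item()

grammatical = "The authors near the window laugh."
ungrammatical = "The authors near the window laughs."

# The model gets this item right if the grammatical form scores higher.
print(sentence_log_prob(grammatical) > sentence_log_prob(ungrammatical))
```

Aggregating this binary judgment over many such pairs built around the same noun yields a per-noun accuracy, the quantity whose relationship to corpus frequency the paper examines.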

Citation (APA)

Yu, C., Sie, R., Tedeschi, N., & Bergen, L. (2020). Word frequency does not predict grammatical knowledge in language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 4040–4054). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.emnlp-main.331
