A Comprehensive Comparison of Neural Networks as Cognitive Models of Inflection


Abstract

Neural networks have long been at the center of a debate around the cognitive mechanism by which humans process inflectional morphology. This debate has gravitated into NLP by way of the question: Are neural networks a feasible account for human behavior in morphological inflection? We address that question by measuring the correlation between human judgments and neural network probabilities for unknown word inflections. We test a larger range of architectures than previously studied on two important tasks for the cognitive processing debate: English past tense, and German number inflection. We find evidence that the Transformer may be a better account of human behavior than LSTMs on these datasets, and that LSTM features known to increase inflection accuracy do not always result in more human-like behavior.
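The study's central measurement, as described above, is the correlation between human judgments of unknown (nonce) word inflections and the probabilities a trained neural network assigns to the same candidate forms. The snippet below is a minimal sketch of that kind of correlation analysis; the data values, variable names, and the choice of Spearman's rank correlation are illustrative assumptions, not the authors' exact procedure.

    # Illustrative sketch only: correlating hypothetical human ratings with
    # hypothetical model probabilities for candidate inflected forms.
    # The data and the use of Spearman correlation are assumptions,
    # not the paper's exact setup.
    from scipy.stats import spearmanr

    # Hypothetical human acceptability ratings for nonce-verb past-tense
    # forms (e.g., regular vs. irregular candidates), on an arbitrary scale.
    human_ratings = [6.1, 3.4, 5.2, 2.8, 4.9]

    # Hypothetical probabilities an inflection model assigns to the same forms.
    model_probs = [0.72, 0.18, 0.55, 0.09, 0.61]

    rho, p_value = spearmanr(human_ratings, model_probs)
    print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")

A higher correlation under this kind of analysis would indicate that the model's probability estimates track human intuitions more closely, which is how the abstract frames the comparison between Transformers and LSTMs.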

Citation (APA)

Wiemerslage, A., Dudy, S., & Kann, K. (2022). A Comprehensive Comparison of Neural Networks as Cognitive Models of Inflection. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 1933–1945). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.126
