Language models (LMs) have been used in both cognitive modeling and engineering studies: they compute information-theoretic complexity metrics, such as surprisal, that simulate humans' cognitive load during reading. This study highlights a limitation of modern neural LMs as the model of choice for this purpose: their context access capacities differ from those of humans. Our results show that constraining the LMs' context access improves their simulation of human reading behavior. We also show that LM-human gaps in context access are associated with specific syntactic constructions, suggesting that incorporating syntactic biases into LMs' context access could further enhance their cognitive plausibility.
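To make the core metric concrete, below is a minimal sketch of computing per-token surprisal under a constrained context window, assuming GPT-2 via Hugging Face transformers. The model choice, the window size, and the `surprisal` helper are illustrative assumptions, not the paper's exact configuration.

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def surprisal(sentence, context_window=None):
    """Per-token surprisal in bits. If `context_window` is set, each token
    is predicted from at most that many preceding tokens, simulating
    constrained context access."""
    ids = tokenizer.encode(sentence)
    values = []
    for i in range(1, len(ids)):
        start = 0 if context_window is None else max(0, i - context_window)
        ctx = torch.tensor([ids[start:i]])
        with torch.no_grad():
            logits = model(ctx).logits[0, -1]          # next-token logits
        log_prob = torch.log_softmax(logits, dim=-1)[ids[i]]
        values.append(-log_prob.item() / math.log(2))  # nats -> bits
    return values

# Compare full-context vs. truncated-context surprisal profiles.
sentence = "The horse raced past the barn fell."
print(surprisal(sentence))                    # unlimited context
print(surprisal(sentence, context_window=5))  # last 5 tokens only
```

In a setup like this, the constrained-context surprisals, rather than the full-context ones, would be regressed against human reading measures to test whether limited context access yields a better fit, in line with the study's finding.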
Kuribayashi, T., Oseki, Y., Brassard, A., & Inui, K. (2022). Context Limitations Make Neural Language Models More Human-Like. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022) (pp. 10421–10436). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.emnlp-main.712