Memory-bounded neural incremental parsing for psycholinguistic prediction


Abstract

Syntactic surprisal has been shown to have an effect on human sentence processing, and can be calculated from the prefix probabilities of generative incremental parsers. Recent state-of-the-art incremental generative neural parsers produce accurate parses and surprisal values, but have unbounded stack memory, which the neural parser may use to maintain explicit in-order representations of all previously parsed words, inconsistent with the results of human memory experiments. In contrast, humans appear to have a bounded working memory, demonstrated by inhibited performance on word recall in multi-clause sentences (Bransford and Franks, 1971) and on center-embedded sentences (Miller and Isard, 1964). Bounded statistical parsers exist, but are less accurate than neural parsers in predicting reading times. This paper describes a neural incremental generative parser that provides accurate surprisal estimates and can be constrained to use a bounded stack. Results show that the accuracy gains of neural parsers can be reliably extended to psycholinguistic modeling without risk of distortion due to unbounded working memory.
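As background for the abstract's claim that surprisal "can be calculated from prefix probabilities of generative incremental parsers": the surprisal of word i is the negative log conditional probability of that word given the preceding words, which equals the difference between consecutive prefix log-probabilities. The following sketch illustrates this conversion; it is a minimal illustration of the general formula, not the paper's implementation, and the example log-probabilities are made up.

```python
import math

def surprisals(prefix_logprobs):
    """Convert prefix log-probabilities log P(w_1..w_i) into per-word
    surprisals -log P(w_i | w_1..w_{i-1})
      = -(log P(w_1..w_i) - log P(w_1..w_{i-1}))."""
    surps = []
    prev = 0.0  # log P(empty prefix) = log 1 = 0
    for lp in prefix_logprobs:
        surps.append(prev - lp)  # difference of consecutive prefix log-probs
        prev = lp
    return surps

# Hypothetical natural-log prefix probabilities for a 3-word sentence:
print(surprisals([-2.0, -5.0, -5.5]))  # [2.0, 3.0, 0.5]
```

Note that the surprisals sum to the negative log-probability of the whole sentence, so any incremental parser that can score prefixes yields word-by-word surprisal estimates for free.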

Citation (APA)

Jin, L., & Schuler, W. (2020). Memory-bounded neural incremental parsing for psycholinguistic prediction. In Proceedings of the 16th International Conference on Parsing Technologies (IWPT) (pp. 48–61). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.iwpt-1.6
