Incremental, Predictive Parsing with Psycholinguistically Motivated Tree-Adjoining Grammar


Abstract

Psycholinguistic research shows that key properties of the human sentence processor are incrementality, connectedness (partial structures contain no unattached nodes), and prediction (upcoming syntactic structure is anticipated). However, no broad-coverage parsing model currently combines all three properties. In this article, we present the first broad-coverage probabilistic parser for PLTAG, a variant of TAG that supports all three requirements. We train our parser on a TAG-transformed version of the Penn Treebank and show that it achieves performance comparable to existing TAG parsers that are incremental but not predictive. We also use our PLTAG model to predict human reading times, demonstrating a better fit on the Dundee eye-tracking corpus than a standard surprisal model. © 2013 Association for Computational Linguistics.
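The reading-time comparison rests on surprisal: the processing cost of a word is modeled as the negative log of its conditional probability given the preceding words, which an incremental parser can derive from the prefix probabilities it computes as it reads the sentence. As a minimal sketch (the function name and input format are illustrative, not from the paper):

```python
import math

def surprisal_per_word(prefix_probs):
    """Convert prefix probabilities P(w_1..w_k) into per-word surprisal.

    surprisal(w_k) = -log2 P(w_k | w_1..w_{k-1})
                   = log2 P(w_1..w_{k-1}) - log2 P(w_1..w_k)

    prefix_probs: list where entry k-1 is the probability the model
    assigns to the sentence prefix w_1..w_k.
    """
    surprisals = []
    prev = 1.0  # probability of the empty prefix
    for p in prefix_probs:
        surprisals.append(math.log2(prev) - math.log2(p))
        prev = p
    return surprisals
```

For example, if a model assigns the one-word prefix probability 0.5 and the two-word prefix probability 0.25, each word carries 1 bit of surprisal. In the article's evaluation, such per-word values are regressed against eye-tracking measures from the Dundee corpus.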

Citation (APA)

Demberg, V., Keller, F., & Koller, A. (2013). Incremental, Predictive Parsing with Psycholinguistically Motivated Tree-Adjoining Grammar. Computational Linguistics, 39(4), 1025–1066. https://doi.org/10.1162/COLI_a_00160
