Incremental recurrent neural network dependency parser with search-based discriminative training


Abstract

We propose a discriminatively trained recurrent neural network (RNN) that predicts the actions for a fast and accurate shift-reduce dependency parser. The RNN uses its output-dependent model structure to compute hidden vectors that encode the preceding partial parse, and uses them to estimate probabilities of parser actions. Unlike a similar previous generative model (Henderson and Titov, 2010), the RNN is trained discriminatively to optimize a fast beam search. This beam search prunes after each shift action, so we add a correctness probability to each shift action and train this score to discriminate between correct and incorrect sequences of parser actions. We also reduce parsing time by caching computations for frequent feature combinations, including during training, giving us both faster training and a form of backoff smoothing. The resulting parser is over 35 times faster than its generative counterpart with nearly the same accuracy, producing state-of-the-art dependency parsing results while requiring minimal feature engineering.
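To make the search strategy concrete, the following is a hypothetical toy sketch (not the paper's implementation): an arc-standard shift-reduce parser whose beam is pruned only after each shift action, as the abstract describes. The `score_action` stub stands in for the trained RNN's action scores, and the paper's additional correctness probability at shift points is omitted; all names here are illustrative.

```python
from dataclasses import dataclass

# Toy arc-standard transition system. Arcs are (head, dependent) pairs
# over word indices; a parse is complete when the buffer is empty and
# one item (the root) remains on the stack.
SHIFT, LEFT, RIGHT = "SHIFT", "LEFT-ARC", "RIGHT-ARC"

@dataclass(frozen=True)
class State:
    stack: tuple
    buffer: tuple
    arcs: frozenset
    score: float = 0.0

def finished(s):
    return not s.buffer and len(s.stack) == 1

def legal_actions(s):
    acts = []
    if s.buffer:
        acts.append(SHIFT)
    if len(s.stack) >= 2:
        acts += [LEFT, RIGHT]
    return acts

def apply_action(s, a, delta):
    if a == SHIFT:
        return State(s.stack + (s.buffer[0],), s.buffer[1:], s.arcs,
                     s.score + delta)
    second, top = s.stack[-2], s.stack[-1]
    if a == LEFT:  # second-from-top becomes a dependent of the top
        return State(s.stack[:-2] + (top,), s.buffer,
                     s.arcs | {(top, second)}, s.score + delta)
    # RIGHT-ARC: top becomes a dependent of second-from-top
    return State(s.stack[:-2] + (second,), s.buffer,
                 s.arcs | {(second, top)}, s.score + delta)

def score_action(s, a):
    # Stub scorer standing in for the RNN's action log-probabilities.
    return {SHIFT: 0.0, LEFT: -0.1, RIGHT: -0.2}[a]

def advance_to_next_shift(state):
    # Expand reduce actions freely; stop each derivation at its next
    # SHIFT (or when it finishes). Pruning thus happens only at shift
    # boundaries, mirroring the abstract's beam-search strategy.
    frontier, stopped = [state], []
    while frontier:
        st = frontier.pop()
        if finished(st):
            stopped.append(st)
            continue
        for a in legal_actions(st):
            nxt = apply_action(st, a, score_action(st, a))
            (stopped if a == SHIFT else frontier).append(nxt)
    return stopped

def beam_parse(n_words, beam_size=4):
    beam = [State((), tuple(range(n_words)), frozenset())]
    while not all(finished(st) for st in beam):
        expanded = []
        for st in beam:
            expanded.extend(advance_to_next_shift(st))
        # Prune to the beam width only after every derivation has shifted.
        beam = sorted(expanded, key=lambda st: -st.score)[:beam_size]
    return max(beam, key=lambda st: st.score)
```

With the stub scorer above, `beam_parse(3)` returns a complete two-arc parse; the real model replaces `score_action` with RNN probabilities conditioned on the full derivation history.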

Citation (APA)

Yazdani, M., & Henderson, J. (2015). Incremental recurrent neural network dependency parser with search-based discriminative training. In CoNLL 2015 - 19th Conference on Computational Natural Language Learning, Proceedings (pp. 142–152). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/k15-1015
