Shift-Reduce Constituent Parsing with Neural Lookahead Features

  • Liu, J.
  • Zhang, Y.

Abstract

Transition-based models can be fast and accurate for constituent parsing. Compared with chart-based models, they leverage richer features by extracting history information from the parser stack, which consists of a sequence of non-local constituents. During incremental parsing, however, constituent information to the right of the current word is not utilized, which is a relative weakness of shift-reduce parsing. To address this limitation, we leverage a fast neural model to extract lookahead features. In particular, we build a bidirectional LSTM model that leverages full-sentence information to predict, for each word, the hierarchy of constituents that the word starts and ends. The results are then passed to a strong transition-based constituent parser as lookahead features. The resulting parser gives a 1.3% absolute improvement on WSJ and 2.3% on CTB over the baseline, achieving the highest reported accuracies for fully-supervised parsing.
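To make the architecture in the abstract concrete, here is a minimal sketch of such a lookahead tagger in PyTorch. This is not the authors' implementation: the class and head names (LookaheadTagger, start_head, end_head) are illustrative assumptions, and the paper's per-word constituent hierarchies are simplified here to a single predicted label per head.

```python
# Sketch: a bidirectional LSTM reads the full sentence, so each word's
# representation includes right-hand context; two heads predict labels
# for the constituents the word starts and the constituents it ends.
import torch
import torch.nn as nn

class LookaheadTagger(nn.Module):  # hypothetical name
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_labels):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # bidirectional=True gives each position full-sentence context,
        # i.e. the "lookahead" unavailable to an incremental parser.
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Two prediction heads over the concatenated forward/backward states.
        self.start_head = nn.Linear(2 * hidden_dim, num_labels)
        self.end_head = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, word_ids):
        # word_ids: (batch, seq_len) integer word indices
        states, _ = self.bilstm(self.embed(word_ids))  # (batch, seq, 2*hidden)
        return self.start_head(states), self.end_head(states)

# Per-word start/end predictions would then be handed to the shift-reduce
# parser as additional lookahead features.
tagger = LookaheadTagger(vocab_size=10000, emb_dim=100,
                         hidden_dim=200, num_labels=30)
start_logits, end_logits = tagger(torch.randint(0, 10000, (1, 7)))
```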

Cite

APA

Liu, J., & Zhang, Y. (2017). Shift-Reduce Constituent Parsing with Neural Lookahead Features. Transactions of the Association for Computational Linguistics, 5, 45–58. https://doi.org/10.1162/tacl_a_00045
