We describe a neural shift-reduce parsing model for CCG, factored into four unidirectional LSTMs and one bidirectional LSTM. This factorization allows the linearization of the complete parsing history, and results in a highly accurate greedy parser that outperforms all previous beam-search shift-reduce parsers for CCG. By further deriving a globally optimized model using a task-based loss, we improve over the state of the art by up to 2.67% labeled F1.
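To make the factorization concrete, here is a minimal PyTorch sketch of how such a parser state could be assembled: one bidirectional LSTM encodes the input sentence, and four unidirectional LSTMs each linearize one component of the parsing history. The class name, the particular history split, and all dimensions are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class FactoredParserState(nn.Module):
    """Hypothetical sketch of a factored LSTM parser state:
    a biLSTM over the input plus four unidirectional LSTMs over
    parsing-history components. Names and dimensions are assumed."""

    def __init__(self, word_dim=100, hidden_dim=128, n_actions=4):
        super().__init__()
        # Bidirectional LSTM over the input token embeddings.
        self.input_encoder = nn.LSTM(word_dim, hidden_dim,
                                     batch_first=True, bidirectional=True)
        # Four unidirectional LSTMs, one per history component
        # (an assumed split, e.g. stack, buffer, action sequence, ...).
        self.history_lstms = nn.ModuleList(
            [nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
             for _ in range(4)])
        # Greedy action classifier over the concatenated state.
        self.action_scorer = nn.Linear(2 * hidden_dim + 4 * hidden_dim,
                                       n_actions)

    def forward(self, tokens, histories):
        # tokens: (batch, sent_len, word_dim)
        # histories: four tensors, each (batch, hist_len, hidden_dim)
        enc, _ = self.input_encoder(tokens)        # (batch, len, 2*hidden)
        parts = [enc[:, -1]]                       # input summary
        for lstm, hist in zip(self.history_lstms, histories):
            out, _ = lstm(hist)
            parts.append(out[:, -1])               # last hidden state
        return self.action_scorer(torch.cat(parts, dim=-1))
```

Because every history component is itself a sequence, each unidirectional LSTM can condition on the complete derivation so far, which is what lets a purely greedy decoder remain competitive with beam search. For the globally optimized model, one standard way to train with a task-based loss is minimum-risk training: minimize the expected task loss (e.g., 1 - labeled F1) under the model's distribution over a beam of candidate derivations. The sketch below shows this generic risk objective; it is not necessarily the exact loss derived in the paper.

```python
def expected_risk_loss(beam_scores, task_losses):
    """Generic minimum-risk objective over a beam of candidates.
    beam_scores: (beam,) model scores; task_losses: (beam,) task
    losses such as 1 - labeled F1 of each candidate (assumed)."""
    probs = torch.softmax(beam_scores, dim=0)  # renormalize over the beam
    return (probs * task_losses).sum()         # expected task loss
```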
Xu, W. (2016). LSTM shift-reduce CCG parsing. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1754–1764). Association for Computational Linguistics. https://doi.org/10.18653/v1/d16-1181