Expected F-measure training for shift-reduce parsing with recurrent neural networks

Abstract

We present expected F-measure training for shift-reduce parsing with recurrent neural networks (RNNs), which enables the learning of a global parsing model optimized for sentence-level F1. We apply the model to CCG parsing, where it improves over a strong greedy RNN baseline by 1.47% F1, yielding state-of-the-art results for shift-reduce CCG parsing.
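The core idea described above can be sketched as follows: score each candidate parse in a beam with the global model, normalize the scores into a distribution, and minimize the negative expected sentence-level F1 under that distribution. This is a minimal illustrative sketch, not the paper's implementation; the function name and inputs are hypothetical.

```python
import math

def expected_f1_loss(candidate_scores, candidate_f1s):
    """Negative expected F1 over beam candidates (illustrative sketch).

    candidate_scores: global model scores for each candidate parse.
    candidate_f1s:    sentence-level F1 of each candidate vs. the gold parse.
    """
    # Softmax over the global scores (subtract max for numerical stability).
    m = max(candidate_scores)
    exps = [math.exp(s - m) for s in candidate_scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Expected F1 under the model distribution; negate to obtain a loss
    # that gradient descent can minimize.
    expected_f1 = sum(p * f for p, f in zip(probs, candidate_f1s))
    return -expected_f1
```

Minimizing this loss pushes probability mass toward candidates with higher F1, directly optimizing the evaluation metric rather than a surrogate such as per-action likelihood.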

Citation (APA)

Xu, W., Auli, M., & Clark, S. (2016). Expected F-measure training for shift-reduce parsing with recurrent neural networks. In 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016 - Proceedings of the Conference (pp. 210–220). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n16-1025
