We present expected F-measure training for shift-reduce parsing with RNNs, which enables learning a global parsing model optimized for sentence-level F1. We apply the model to CCG parsing, where it improves over a strong greedy RNN baseline by 1.47% F1, yielding state-of-the-art results for shift-reduce CCG parsing.
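To make the training objective concrete, here is a minimal sketch (not the authors' code) of the core idea behind expected F-measure training: given an n-best list of candidate derivations with model scores, the loss is the negative expected sentence-level F1 under the model's renormalized distribution. The function name, tensor shapes, and toy values below are illustrative assumptions; the paper computes F1 over CCG dependencies and normalizes over a beam rather than a fixed n-best list.

```python
import torch

def expected_f1_loss(scores: torch.Tensor, f1: torch.Tensor) -> torch.Tensor:
    """scores: model scores for n candidate parses (requires grad);
    f1: per-candidate F1 against the gold parse (constants in [0, 1])."""
    probs = torch.softmax(scores, dim=0)  # renormalize over the candidate list
    return -(probs * f1).sum()            # minimizing this maximizes expected F1

# Hypothetical usage: four candidate derivations from a beam.
scores = torch.tensor([2.0, 1.5, 0.3, -0.8], requires_grad=True)
f1 = torch.tensor([0.92, 1.00, 0.75, 0.40])
loss = expected_f1_loss(scores, f1)
loss.backward()  # gradients shift probability mass toward high-F1 parses
print(loss.item(), scores.grad)
```

Because the expectation is differentiable in the model scores, gradients flow through the whole scoring network, which is what allows a globally normalized parser to be trained directly toward the evaluation metric.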
CITATION STYLE
Xu, W., Auli, M., & Clark, S. (2016). Expected F-measure training for shift-reduce parsing with recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2016) (pp. 210–220). Association for Computational Linguistics. https://doi.org/10.18653/v1/n16-1025