Improving neural parsing by disentangling model combination and reranking effects


Abstract

Recent work has proposed several generative neural models for constituency parsing that achieve state-of-the-art results. Since direct search in these generative models is difficult, they have primarily been used to rescore candidate outputs from base parsers in which decoding is more straightforward. We first present an algorithm for direct search in these generative models. We then demonstrate that the rescoring results are at least partly due to implicit model combination rather than reranking effects. Finally, we show that explicit model combination can improve performance even further, resulting in new state-of-the-art numbers on the PTB of 94.25 F1 when training only on gold data and 94.66 F1 when using external data.
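The rescoring setup the abstract describes can be sketched as a simple score interpolation: a base parser proposes candidate trees with scores, a generative model rescores them, and the two log-probabilities are combined before selecting the best tree. The following is an illustrative sketch, not the paper's implementation; the interpolation weight `lam` and the toy scores are invented for the example.

```python
def rerank(candidates, lam=0.5):
    """Pick the best candidate by interpolating two model scores.

    candidates: list of (tree, base_score, gen_score) tuples, where the
    scores are log-probabilities from the base parser and the generative
    model. lam=0.0 corresponds to pure reranking by the generative model
    alone; 0 < lam < 1 corresponds to explicit model combination.
    """
    return max(candidates, key=lambda c: lam * c[1] + (1 - lam) * c[2])

# Toy example: three candidate parses with (base, generative) log-probs.
cands = [
    ("(S (NP ...) (VP ...))", -3.0, -2.5),
    ("(S (NP ...))",          -2.0, -4.0),
    ("(S (VP ...))",          -5.0, -1.0),
]

best_combined = rerank(cands, lam=0.5)   # interpolated scores
best_rerank = rerank(cands, lam=0.0)     # generative model only
```

With `lam=0.0` the generative model alone decides; with an intermediate `lam` the two models can overrule each other, which is the "implicit model combination" effect the paper disentangles from pure reranking.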

Citation (APA)
Fried, D., Stern, M., & Klein, D. (2017). Improving neural parsing by disentangling model combination and reranking effects. In ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) (Vol. 2, pp. 161–166). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/P17-2025
