Data-defined kernels for parse reranking derived from probabilistic models


Abstract

Previous research applying kernel methods to natural language parsing has focused on proposing kernels over parse trees, which are hand-crafted based on domain knowledge and computational considerations. In this paper we propose a method for defining kernels in terms of a probabilistic model of parsing. This model is trained so that its parameters reflect the generalizations in the training data. Our method then uses these trained parameters to define a kernel for reranking parse trees. In experiments, we use a neural-network-based statistical parser as the probabilistic model, and use the resulting kernel with the Voted Perceptron algorithm to rerank the top 20 parses from the probabilistic model. This method achieves a significant improvement over the accuracy of the probabilistic model. © 2005 Association for Computational Linguistics.
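The reranking setup the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature vectors for each candidate parse would be derived from the trained probabilistic model's parameters (for instance, gradients of the model's log-probability, as in a Fisher kernel), which are simply given as inputs here, and the averaged-perceptron variant stands in for voting. All function and variable names (`train_reranker`, `rerank`, `candidate_lists`) are illustrative.

```python
# Hedged sketch: rerank n-best candidate parses with an averaged perceptron
# over data-defined feature vectors. The linear kernel here corresponds to a
# dot product in the model-derived feature space.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def train_reranker(candidate_lists, gold_indices, epochs=10):
    """candidate_lists: one list of feature vectors per sentence (the n-best parses);
    gold_indices: position of the oracle-best parse in each candidate list."""
    dim = len(candidate_lists[0][0])
    w = [0.0] * dim
    w_sum = [0.0] * dim  # running sum of weights, for averaging
    for _ in range(epochs):
        for cands, gold in zip(candidate_lists, gold_indices):
            # the reranker's current top choice among the candidates
            pred = max(range(len(cands)), key=lambda i: dot(w, cands[i]))
            if pred != gold:
                # standard perceptron update toward the gold parse
                for j in range(dim):
                    w[j] += cands[gold][j] - cands[pred][j]
            for j in range(dim):
                w_sum[j] += w[j]
    return w_sum  # averaged weights, a common stand-in for explicit voting

def rerank(w, cands):
    """Return the index of the highest-scoring candidate parse."""
    return max(range(len(cands)), key=lambda i: dot(w, cands[i]))
```

In the paper's setting, the probabilistic model both produces the top-20 candidate list and (through its trained parameters) defines the feature map; the reranker only has to separate better parses from worse ones within each list.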

Citation (APA)

Henderson, J., & Titov, I. (2005). Data-defined kernels for parse reranking derived from probabilistic models. In ACL-05 - 43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 181–188). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1219840.1219863
