k-NN for local probability estimation in generative parsing models

Abstract

We describe a history-based generative parsing model which uses a k-nearest neighbour (k-NN) technique to estimate the model's parameters. Taking the output of a base n-best parser, we use our model to re-estimate the log probability of each parse tree in the n-best list for sentences from the Penn Wall Street Journal treebank. By further decomposing the local probability distributions of the base model, enriching the set of conditioning features used to estimate the model's parameters, and using k-NN in place of the base model's Witten-Bell estimation, we achieve an f-score of 89.2%, representing a 4% relative decrease in f-score error over the 1-best output of the base parser.
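
The sketch below illustrates the general idea of k-NN re-scoring of an n-best list as described in the abstract; it is not the authors' implementation. The data structures (training_events, parse_decisions), the feature-overlap distance, the fixed k, and the probability floor are illustrative assumptions; the actual model uses a richer, weighted set of conditioning features and is compared against Witten-Bell smoothing.

```python
import math
from collections import Counter

def knn_local_probability(query_features, training_events, k=5):
    """Estimate P(outcome | history) for one local parsing decision by finding
    the k training events whose conditioning features most closely match the
    query, then reading the outcome distribution off those neighbours."""
    # Simple overlap distance: number of conditioning features that disagree.
    def distance(event_features):
        return sum(1 for f, v in query_features.items() if event_features.get(f) != v)

    neighbours = sorted(training_events, key=lambda ev: distance(ev["features"]))[:k]
    counts = Counter(ev["outcome"] for ev in neighbours)
    total = sum(counts.values())
    return {outcome: c / total for outcome, c in counts.items()}

def rescore_parse(parse_decisions, training_events, k=5, floor=1e-6):
    """Sum the log probabilities of the local decisions deriving one parse tree."""
    log_prob = 0.0
    for features, outcome in parse_decisions:
        dist = knn_local_probability(features, training_events, k)
        log_prob += math.log(dist.get(outcome, floor))  # floor avoids log(0)
    return log_prob

# Re-ranking: choose the parse in the n-best list with the highest
# re-estimated log probability, e.g.
# best = max(n_best_parses, key=lambda p: rescore_parse(p.decisions, training_events))
```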

Citation (APA)

Hogan, D. (2005). k-NN for local probability estimation in generative parsing models. In IWPT 2005 - Proceedings of the 9th International Workshop on Parsing Technologies (pp. 202–203). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1654494.1654522
