Combining Sample Selection and Error-Driven Pruning for Machine Learning of Coreference Rules


Abstract

Most machine learning solutions to noun phrase coreference resolution recast the problem as a classification task. We examine three potential problems with this reformulation, namely, skewed class distributions, the inclusion of "hard" training instances, and the loss of transitivity inherent in the original coreference relation. We show how these problems can be handled via intelligent sample selection and error-driven pruning of classification rule-sets. The resulting system achieves F-measures of 69.5 and 63.4 on the MUC-6 and MUC-7 coreference resolution data sets, respectively, surpassing the performance of the best MUC-6 and MUC-7 coreference systems. In particular, the system outperforms the best-performing learning-based coreference system to date.
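
The sketch below is a rough illustration of the two ideas the abstract names, not the paper's actual procedure: it assumes a Soon et al.-style closest-antecedent scheme for instance creation and a simple greedy pruning loop scored on held-out documents. All names (Mention, build_training_pairs, prune_rules, score_fn) are hypothetical, and the feature set and rule learner used in the paper are omitted.

```python
# Illustrative sketch only; details are assumptions, not taken from the paper.
from dataclasses import dataclass
from typing import List, Optional, Sequence, Tuple


@dataclass
class Mention:
    """A noun phrase in a document, with its gold coreference chain id (None if singleton)."""
    index: int
    chain_id: Optional[int]


def build_training_pairs(mentions: Sequence[Mention]) -> List[Tuple[Mention, Mention, bool]]:
    """Sample selection: instead of pairing every NP with every preceding NP
    (which yields a heavily skewed, noisy instance set), pair each anaphoric NP
    only with its closest preceding coreferent NP (positive) and with the NPs
    in between (negatives)."""
    pairs: List[Tuple[Mention, Mention, bool]] = []
    for j, anaphor in enumerate(mentions):
        if anaphor.chain_id is None:
            continue
        # Find the closest preceding mention in the same coreference chain.
        closest = None
        for i in range(j - 1, -1, -1):
            if mentions[i].chain_id == anaphor.chain_id:
                closest = i
                break
        if closest is None:
            continue
        pairs.append((mentions[closest], anaphor, True))
        for i in range(closest + 1, j):
            pairs.append((mentions[i], anaphor, False))
    return pairs


def prune_rules(rules, score_fn):
    """Error-driven pruning: greedily drop any rule whose removal strictly
    improves a document-level coreference score on held-out data, so the rule
    set is tuned against the clustering-level metric rather than pairwise
    classification accuracy."""
    kept = list(rules)
    improved = True
    while improved:
        improved = False
        baseline = score_fn(kept)
        for rule in list(kept):
            candidate = [r for r in kept if r is not rule]
            if score_fn(candidate) > baseline:
                kept = candidate
                improved = True
                break
    return kept
```

In this sketch, sample selection limits each anaphor to one positive pair and a handful of nearby negatives, which speaks to the skewed class distributions the abstract mentions, while the pruning loop evaluates candidate rule sets against a coreference-level score rather than instance-level accuracy.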

Citation (APA)

Ng, V., & Cardie, C. (2002). Combining Sample Selection and Error-Driven Pruning for Machine Learning of Coreference Rules. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, EMNLP 2002 (pp. 55–62). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1118693.1118701
