Efficient computation of entropy gradient for semi-supervised conditional random fields

Abstract

Entropy regularization is a straightforward and successful method of semi-supervised learning that augments the traditional conditional likelihood objective function with an additional term that aims to minimize the predicted label entropy on unlabeled data. It has previously been demonstrated to provide positive results in linear-chain CRFs, but the published method for calculating the entropy gradient requires significantly more computation than supervised CRF training. This paper presents a new derivation and dynamic program for calculating the entropy gradient that is significantly more efficient, having the same asymptotic time complexity as supervised CRF training. We also present efficient generalizations of this method for calculating the label entropy of all sub-sequences, which is useful for active learning, among other applications.
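
For reference, the entropy-regularized objective the abstract describes is commonly written as follows. This is a sketch in generic notation, not taken from the paper itself: D_L denotes the labeled pairs, D_U the unlabeled inputs, and \lambda a tuning weight.

% Sketch of the entropy-regularized objective (notation assumed, not from the paper).
% D_L = labeled pairs (x, y); D_U = unlabeled inputs x; \lambda = entropy weight.
O(\theta) = \sum_{(x, y) \in D_L} \log p_\theta(y \mid x)
          - \lambda \sum_{x \in D_U} H\bigl( p_\theta(\cdot \mid x) \bigr),
% where the predicted label entropy on an unlabeled input x is
H\bigl( p_\theta(\cdot \mid x) \bigr)
          = - \sum_{y'} p_\theta(y' \mid x) \, \log p_\theta(y' \mid x).

Maximizing this objective favors parameters that fit the labeled data while making confident (low-entropy) predictions on the unlabeled data; the paper's contribution is a dynamic program that computes the gradient of the entropy term in the same asymptotic time as the likelihood term.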

Citation (APA)

Mann, G. S., & McCallum, A. (2007). Efficient computation of entropy gradient for semi-supervised conditional random fields. In NAACL-HLT 2007 - Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers (pp. 109–112). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1614108.1614136
