Training conditional random fields with unlabeled data and limited number of labeled examples

Abstract

Conditional random fields are a probabilistic approach that has been applied to sequence labeling tasks with good performance. We attempt to extend the model so that the human effort needed to prepare labeled training examples can be reduced by also considering unlabeled data. Instead of maximizing the conditional likelihood, we aim at maximizing the likelihood of the observation sequences from both the labeled and the unlabeled data. We have conducted extensive experiments on two different data sets to evaluate the performance. The experimental results show that the model learned from both labeled and unlabeled data outperforms the model learned from the labeled training examples alone. © Springer-Verlag Berlin Heidelberg 2006.
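Read literally, the objective described above replaces the usual CRF conditional likelihood with an observation likelihood computed over both data sets. The lines below give a minimal LaTeX sketch of one such combined objective; the joint parameterization p_\theta(x, y), the trade-off weight \lambda, and the set names D_L and D_U are illustrative assumptions, not notation taken from the paper.

% Sketch of an assumed semi-supervised training objective.
% D_L: labeled sequence pairs (x, y); D_U: unlabeled observation sequences x.
% p_\theta(x, y) is an assumed joint model; \lambda is an assumed trade-off weight.
\[
\mathcal{L}(\theta)
  = \sum_{(x, y) \in D_L} \log p_\theta(x, y)
  + \lambda \sum_{x \in D_U} \log \sum_{y'} p_\theta(x, y'),
\]
% The inner sum marginalizes over candidate label sequences y', so each unlabeled
% sequence contributes through its observation likelihood p_\theta(x).

Under this reading, the labeled examples constrain the label-observation dependencies directly, while the unlabeled sequences only need to be well explained by the model after summing out the labels.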

CITATION STYLE

APA

Wong, T. L., & Lam, W. (2006). Training conditional random fields with unlabeled data and limited number of labeled examples. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3930 LNAI, pp. 477–486). https://doi.org/10.1007/11739685_50
