Using cluster-based sampling to select initial training set for active learning in text classification

Abstract

We propose a method for selecting the initial training examples for active learning so that the learner reaches high accuracy with fewer subsequent queries. Our method divides the unlabeled examples into clusters of similar ones and then selects from each cluster its most representative example, namely the one closest to the cluster's centroid. These representative examples are labeled by the user and become the initial training set. We also promote the inclusion of what we call model examples in the initial training set. Although the model examples, which are in fact the centroids of the clusters, are not real examples, they contribute significantly to classification accuracy because each one represents a group of similar examples so well. Experiments with various text data sets show that an active learner starting from the initial training set selected by our method reaches higher accuracy faster than one starting from a randomly generated initial training set.
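The selection procedure the abstract describes — cluster the unlabeled pool, then take the point nearest each centroid as its representative — can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the lightweight k-means, the function names, and the parameter choices are assumptions for the sake of the example.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Cluster the unlabeled pool X (n x d) into k groups with plain k-means."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

def initial_training_set(X, k):
    """Return (indices of representative examples, cluster centroids).

    The representative of each cluster is the real example closest to the
    centroid; the centroids themselves can serve as the abstract's
    'model examples'.
    """
    X = np.asarray(X, dtype=float)
    centers, labels = kmeans(X, k)
    reps = []
    for j in range(len(centers)):
        idx = np.where(labels == j)[0]
        if len(idx) == 0:
            continue  # skip empty clusters
        d = np.linalg.norm(X[idx] - centers[j], axis=1)
        reps.append(idx[d.argmin()])
    return np.array(reps), centers
```

The representatives are shown to the user for labeling and seed the active learner; the returned centroids could additionally be labeled and included as the synthetic "model examples" the paper advocates.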

Citation (APA)
Kang, J., Ryu, K. R., & Kwon, H. C. (2004). Using cluster-based sampling to select initial training set for active learning in text classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3056, pp. 384–388). Springer Verlag. https://doi.org/10.1007/978-3-540-24775-3_46
