Decision trees have been widely used in many data mining applications due to their interpretable representation. However, learning an accurate decision tree model often requires a large amount of labeled training data, and labeling data is costly and time-consuming. In this paper, we study learning decision trees with lower labeling cost from two perspectives: data quality and data quantity. At each step of the active learning process, we learn a random forest and then use it to label a large quantity of unlabeled data. To counter the large tree sizes caused by machine labeling, we generate weighted (soft) labeled data using the prediction confidence of the labeling classifier. Empirical studies show that our method can significantly reduce the labeling cost of active learning for decision tree induction, without sacrificing the compactness of the resulting trees. © 2009 Springer Berlin Heidelberg.
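The procedure the abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' exact algorithm: the dataset, model parameters, and the choice to use confidence values directly as sample weights are all assumptions made for the example.

```python
# Sketch: one round of active learning with automatic soft labeling.
# Assumptions: scikit-learn models, a synthetic dataset, and raw
# prediction confidence used as the soft-label weight.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)

# Small human-labeled pool; the rest is treated as unlabeled.
labeled = rng.choice(len(X), size=50, replace=False)
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

# 1. Learn a random forest on the labeled data.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X[labeled], y[labeled])

# 2. Machine-label the unlabeled pool, keeping the prediction
#    confidence of each label.
proba = forest.predict_proba(X[unlabeled])
machine_labels = proba.argmax(axis=1)
confidence = proba.max(axis=1)

# 3. Train the final decision tree on human + machine labels,
#    weighting each machine-labeled example by its confidence so
#    that uncertain labels contribute less and do not inflate
#    the tree.
X_all = np.vstack([X[labeled], X[unlabeled]])
y_all = np.concatenate([y[labeled], machine_labels])
w_all = np.concatenate([np.ones(len(labeled)), confidence])

tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_all, y_all, sample_weight=w_all)
```

In a full active learning loop, steps 1–3 would repeat: the forest selects informative examples for human labeling, the labeled pool grows, and the soft-labeled set is regenerated each round.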
Su, J., Jelber, S. S., Matwin, S., & Huang, J. (2009). Active learning with automatic soft labeling for induction of decision trees. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5549 LNAI, pp. 241–244). https://doi.org/10.1007/978-3-642-01818-3_33