Target learning: A novel framework to mine significant dependencies for unlabeled data

Abstract

To mine significant dependencies among predictive attributes, much work has been carried out on learning Bayesian network classifiers (BNC_T) from a labeled training data set T. However, if BNC_T does not capture the "right" dependencies, i.e., those most relevant to an unlabeled testing instance, the result is performance degradation. To address this issue we propose a novel framework, called target learning, that takes each unlabeled testing instance as a target and builds an "unstable" Bayesian model BNC_P for it. To make BNC_P and BNC_T complementary to each other and efficient to apply in combination, the same learning strategy is used to build both. Experimental comparison on 32 large data sets from the UCI machine learning repository shows that, for BNCs with different degrees of dependence, target learning consistently improves generalization performance with minimal additional computation.
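The abstract's core idea is to pair a global classifier BNC_T, trained once on the labeled set, with an instance-specific classifier BNC_P built anew for each test instance, then combine their predictions. The sketch below illustrates that workflow under assumptions not stated in the abstract: naive Bayes stands in for the actual BNC structures, BNC_P is built from training instances re-weighted by attribute overlap with the target instance (a hypothetical construction; the paper's method differs), and the two models are combined by summing log-posteriors.

```python
import math
from collections import Counter

def train_nb(X, y, alpha=1.0):
    """Train a categorical naive Bayes model (a stand-in for a BNC)."""
    classes = sorted(set(y))
    n_attrs = len(X[0])
    prior = {c: (y.count(c) + alpha) / (len(y) + alpha * len(classes))
             for c in classes}
    # per-class, per-attribute value counts
    counts = {c: [Counter() for _ in range(n_attrs)] for c in classes}
    for xi, yi in zip(X, y):
        for j, v in enumerate(xi):
            counts[yi][j][v] += 1
    # distinct values per attribute, for Laplace smoothing
    vals = [set(xi[j] for xi in X) for j in range(n_attrs)]

    def cond_prob(c, j, v):
        n_c = sum(counts[c][j].values())
        return (counts[c][j][v] + alpha) / (n_c + alpha * len(vals[j]))

    return classes, prior, cond_prob

def log_posterior(model, x):
    classes, prior, cond_prob = model
    return {c: math.log(prior[c])
               + sum(math.log(cond_prob(c, j, v)) for j, v in enumerate(x))
            for c in classes}

def target_predict(X, y, x_test):
    """Combine global BNC_T with an instance-specific BNC_P for x_test.

    BNC_P here is a hypothetical stand-in: the same learner applied to
    training instances duplicated in proportion to their attribute
    overlap with x_test, so dependencies near the target dominate."""
    bnc_t = train_nb(X, y)
    weighted_X, weighted_y = [], []
    for xi, yi in zip(X, y):
        overlap = sum(1 for a, b in zip(xi, x_test) if a == b)
        weighted_X.extend([xi] * (1 + overlap))
        weighted_y.extend([yi] * (1 + overlap))
    bnc_p = train_nb(weighted_X, weighted_y)
    # combine the two complementary models by summing log-posteriors
    lp_t = log_posterior(bnc_t, x_test)
    lp_p = log_posterior(bnc_p, x_test)
    return max(lp_t, key=lambda c: lp_t[c] + lp_p[c])

# toy usage: two binary attributes, two classes
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 1, 1]
print(target_predict(X, y, (0, 0)))
```

Note that, as the abstract emphasizes, BNC_T is trained once while BNC_P is rebuilt per test instance, so the extra cost per prediction is one additional model fit on the (re-weighted) training data.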


CITATION STYLE

APA

Wang, L., Chen, S., & Mammadov, M. (2018). Target learning: A novel framework to mine significant dependencies for unlabeled data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10937 LNAI, pp. 106–117). Springer Verlag. https://doi.org/10.1007/978-3-319-93034-3_9
