Traditional classification algorithms often fail when the independent and identically distributed (i.i.d.) assumption does not hold; cross-domain learning has emerged recently to deal with this problem. We observe that although a model trained on the training data may not perform well over all the test data, it can give much better predictions on the subset of test instances it labels with high confidence. Moreover, this subset may follow a distribution more similar to that of the test data as a whole. In this study, we propose to construct a reliable data set from the test instances predicted with high confidence, and to use this reliable data as training data. Furthermore, we develop an EM algorithm to refine the model trained from the reliable data. Extensive experiments on text classification verify the effectiveness and efficiency of our methods. It is worth mentioning that the model trained from the reliable data achieves a significant performance improvement over the one trained from the original training data, and our methods outperform all the baseline algorithms. © 2012 IFIP International Federation for Information Processing.
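The pipeline the abstract describes can be sketched as follows. This is a minimal illustrative example, not the paper's actual method: the classifier (logistic regression), the confidence threshold of 0.9, the synthetic two-domain data, and the simple self-labeling loop standing in for the paper's EM refinement are all assumptions introduced here for clarity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic source (training) and shifted target (test) domains.
# The shift in the target mean violates the i.i.d. assumption.
X_src = rng.normal(0.0, 1.0, size=(200, 2))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)
X_tgt = rng.normal(0.5, 1.0, size=(200, 2))

# 1. Train an initial model on the source-domain data.
base = LogisticRegression().fit(X_src, y_src)

# 2. Construct the "reliable data": target instances whose predicted
#    class probability exceeds a confidence threshold (0.9 is assumed).
conf = base.predict_proba(X_tgt).max(axis=1)
mask = conf >= 0.9
X_rel, y_rel = X_tgt[mask], base.predict(X_tgt)[mask]

# 3. Retrain on the reliable data alone.
refined = LogisticRegression().fit(X_rel, y_rel)

# 4. Iterative refinement: re-label the full target set with the current
#    model and retrain (a simplified self-training stand-in for the
#    paper's EM algorithm, which refines the model parameters).
for _ in range(3):
    pseudo = refined.predict(X_tgt)
    refined = LogisticRegression().fit(X_tgt, pseudo)
```

The key design choice is that step 3 discards the original source labels entirely and trusts only the high-confidence pseudo-labels, on the grounds that these instances are drawn from the target distribution itself.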
CITATION STYLE
Zhuang, F., He, Q., & Shi, Z. (2012). Effectively constructing reliable data for cross-domain text classification. In IFIP Advances in Information and Communication Technology (Vol. 385 AICT, pp. 16–27). https://doi.org/10.1007/978-3-642-32891-6_6