Kernel mean matching with a large margin

Abstract

Various instance weighting methods have been proposed for instance-based transfer learning. Kernel Mean Matching (KMM) is a typical instance weighting approach that estimates instance importance by matching the source and target distributions in a universal reproducing kernel Hilbert space (RKHS). However, KMM is an unsupervised approach that does not exploit the class labels of the source data. In this paper, we extend KMM by leveraging class label knowledge, integrating KMM and SVM into a unified optimization framework called KMM-LM (Large Margin). The objective of KMM-LM is to maximize the geometric soft margin while simultaneously minimizing both the empirical classification error and the KMM-based domain discrepancy. KMM-LM uses an iterative minimization algorithm to find the optimal weight vector of the classification decision hyperplane together with the importance weight vector of the source-domain instances. Experiments show that KMM-LM outperforms state-of-the-art baselines. © Springer-Verlag 2012.
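The standard KMM step referenced in the abstract estimates importance weights β by minimizing the distance between the weighted source mean and the target mean in the RKHS, which reduces to a quadratic program: minimize ½ βᵀKβ − κᵀβ subject to 0 ≤ βᵢ ≤ B and |Σβᵢ − nₛ| ≤ nₛε. The sketch below is an illustrative implementation of this plain (unsupervised) KMM component only, not the authors' KMM-LM framework; the RBF bandwidth, the bound B, and the use of `scipy.optimize.minimize` are assumptions for the example.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (universal) kernel matrix between rows of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def kmm_weights(Xs, Xt, gamma=1.0, B=10.0, eps=None):
    """Plain KMM: importance weights for source instances Xs w.r.t. target Xt."""
    ns, nt = len(Xs), len(Xt)
    if eps is None:
        eps = B / np.sqrt(ns)  # common heuristic for the normalization slack
    K = rbf_kernel(Xs, Xs, gamma)                              # source-source kernel
    kappa = (ns / nt) * rbf_kernel(Xs, Xt, gamma).sum(axis=1)  # source-target term
    obj = lambda b: 0.5 * b @ K @ b - kappa @ b
    grad = lambda b: K @ b - kappa
    cons = [  # keep the weights summing to roughly ns: ns(1-eps) <= sum(b) <= ns(1+eps)
        {"type": "ineq", "fun": lambda b: ns * (1 + eps) - b.sum()},
        {"type": "ineq", "fun": lambda b: b.sum() - ns * (1 - eps)},
    ]
    res = minimize(obj, np.ones(ns), jac=grad, bounds=[(0.0, B)] * ns,
                   constraints=cons, method="SLSQP")
    return res.x

# Toy check: target shifted to the right of the source, so source points
# lying in the high-density target region should receive larger weights.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (60, 1))
Xt = rng.normal(1.0, 1.0, (60, 1))
w = kmm_weights(Xs, Xt, gamma=0.5)
```

In KMM-LM these weights are not solved in isolation: the paper alternates between this weighting step and the large-margin SVM objective, so the β found here would additionally be shaped by the source labels.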

Citation (APA)

Tan, Q., Deng, H., & Yang, P. (2012). Kernel mean matching with a large margin. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7713 LNAI, pp. 223–234). https://doi.org/10.1007/978-3-642-35527-1_19
