Sample-adaptive multiple kernel learning


Abstract

Existing multiple kernel learning (MKL) algorithms indiscriminately apply the same set of kernel combination weights to all samples. However, the utility of base kernels can vary across samples: a base kernel that is useful for one sample may be noisy for another. In this case, rigidly applying the same set of kernel combination weights can adversely affect learning performance. To improve this situation, we propose a sample-adaptive MKL algorithm in which base kernels are adaptively switched on or off with respect to each sample. We achieve this by assigning a latent binary variable to each base kernel when it is applied to a sample. The kernel combination weights and the latent variables are jointly optimized via the margin maximization principle. As demonstrated on five benchmark data sets, the proposed algorithm consistently outperforms comparable algorithms in the literature.
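As a minimal sketch of the idea described in the abstract, the combined kernel can be read as weighting each base kernel by a shared coefficient and gating its contribution to an entry (i, j) by the per-sample binary switches of both samples, i.e. something of the form sum_m w_m * eta_im * eta_jm * K_m(x_i, x_j). The function names, array shapes, and this exact combination form are illustrative assumptions, not the paper's verbatim formulation:

```python
import numpy as np

def sample_adaptive_kernel(base_kernels, weights, switches):
    """Combine base kernels with per-sample on/off switches (sketch).

    base_kernels: (M, n, n) array of M base kernel matrices over n samples
    weights:      (M,) kernel combination weights, shared across samples
    switches:     (n, M) binary matrix; switches[i, m] = 1 means base
                  kernel m is switched on for sample i (the latent
                  binary variables of the abstract)
    """
    M, n, _ = base_kernels.shape
    K = np.zeros((n, n))
    for m in range(M):
        s = switches[:, m].astype(float)
        # Kernel m contributes to entry (i, j) only when it is switched
        # on for both sample i and sample j.
        K += weights[m] * np.outer(s, s) * base_kernels[m]
    return K

# Tiny illustrative example (assumed data, not from the paper):
# two samples, two base kernels; kernel 1 is switched off for sample 1.
base = np.stack([np.eye(2), np.ones((2, 2))])
K = sample_adaptive_kernel(base, np.array([1.0, 0.5]),
                           np.array([[1, 1], [1, 0]]))
```

In the actual algorithm the switches and weights are learned jointly under margin maximization; the sketch above only shows how fixed values of both would assemble the per-sample-adapted kernel matrix.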

Citation (APA)

Liu, X., Wang, L., Zhang, J., & Yin, J. (2014). Sample-adaptive multiple kernel learning. In Proceedings of the National Conference on Artificial Intelligence (Vol. 3, pp. 1975–1981). AI Access Foundation. https://doi.org/10.1609/aaai.v28i1.8983
