Multitask learning using regularized multiple kernel learning


Abstract

The empirical success of kernel-based learning algorithms depends heavily on the kernel function used. Instead of using a single fixed kernel function, multiple kernel learning (MKL) algorithms learn a combination of different kernel functions in order to obtain a similarity measure that better matches the underlying problem. We study multitask learning (MTL) problems and formulate a novel MTL algorithm that trains coupled but nonidentical MKL models across the tasks. The proposed algorithm is especially useful for tasks that have different input and/or output space characteristics and is computationally very efficient. Empirical results on three data sets validate the generalization performance and the efficiency of our approach. © 2011 Springer-Verlag.
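The core MKL idea described in the abstract, combining several base kernels into a single similarity measure, can be sketched as a weighted sum of kernel matrices. The Python sketch below is a minimal illustration only: the choice of base kernels (linear, polynomial, RBF), their parameters, and the fixed weights `eta` are assumptions for demonstration, not the paper's method, which learns coupled, task-specific combination weights within a regularized multitask formulation.

```python
import numpy as np

def linear_kernel(X, Z):
    # k(x, z) = <x, z>
    return X @ Z.T

def polynomial_kernel(X, Z, degree=2):
    # k(x, z) = (<x, z> + 1)^degree
    return (X @ Z.T + 1.0) ** degree

def rbf_kernel(X, Z, gamma=0.1):
    # k(x, z) = exp(-gamma * ||x - z||^2)
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Z**2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return np.exp(-gamma * sq_dists)

def combined_kernel(X, Z, weights):
    # Weighted combination of base kernel matrices,
    # the basic MKL idea: K = sum_m eta_m * K_m.
    bases = [linear_kernel(X, Z), polynomial_kernel(X, Z), rbf_kernel(X, Z)]
    return sum(w * K for w, K in zip(weights, bases))

# Illustrative usage with fixed (not learned) weights.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
eta = np.array([0.5, 0.3, 0.2])  # nonnegative, sums to 1
K = combined_kernel(X, X, eta)
print(K.shape)  # (5, 5)
```

In the setting the abstract describes, each task would have its own combination weights, coupled to the other tasks' weights through regularization rather than being fixed in advance.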

Citation (APA)

Gönen, M., Kandemir, M., & Kaski, S. (2011). Multitask learning using regularized multiple kernel learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7063 LNCS, pp. 500–509). https://doi.org/10.1007/978-3-642-24958-7_58
