Kernels for multi-task learning

ISSN: 1049-5258
Citations: 55 · Readers (Mendeley): 159

Abstract

This paper provides a foundation for multi-task learning using reproducing kernel Hilbert spaces of vector-valued functions. In this setting, the kernel is a matrix-valued function. Some explicit examples will be described which go beyond our earlier results in [7]. In particular, we characterize classes of matrix-valued kernels which are linear and are of the dot product or the translation invariant type. We discuss how these kernels can be used to model relations between the tasks and present linear multi-task learning algorithms. Finally, we present a novel proof of the representer theorem for a minimizer of a regularization functional which is based on the notion of minimal norm interpolation.
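To make the abstract's setup concrete, here is a minimal sketch of multi-task learning with a matrix-valued kernel. It uses the common separable form K(x, x') = k(x, x') · A, where k is a scalar (here translation-invariant Gaussian) kernel and the tasks × tasks matrix A encodes relations between tasks, combined with kernel ridge regression. All function names, the choice of scalar kernel, and the ridge formulation are illustrative assumptions, not details taken from the paper itself.

```python
import numpy as np

def scalar_kernel(X1, X2, gamma=1.0):
    """Gaussian (translation-invariant) scalar kernel k(x, x')."""
    sq = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * sq)

def multitask_gram(X1, X2, A, gamma=1.0):
    """Block Gram matrix of the matrix-valued kernel K(x, x') = k(x, x') * A.

    Block (i, j) equals k(x_i, x'_j) * A, so samples index the blocks and
    tasks index entries within each block (Kronecker structure).
    """
    return np.kron(scalar_kernel(X1, X2, gamma), A)

def fit_multitask_ridge(X, Y, A, lam=0.1, gamma=1.0):
    """Kernel ridge regression in the vector-valued RKHS.

    X: (n, d) inputs; Y: (n, T) outputs, one column per task.
    Solves (G + lam * I) c = vec(Y) for the coefficient vector c,
    as guaranteed by the representer theorem.
    """
    n, T = Y.shape
    G = multitask_gram(X, X, A, gamma)
    # Y.reshape(-1) is row-major: samples outer, tasks inner — this
    # matches the ordering of np.kron(K, A) above.
    return np.linalg.solve(G + lam * np.eye(n * T), Y.reshape(-1))

def predict(Xnew, X, c, A, gamma=1.0):
    """Evaluate the vector-valued predictor at new points."""
    G = multitask_gram(Xnew, X, A, gamma)
    T = A.shape[0]
    return (G @ c).reshape(len(Xnew), T)
```

Choosing A = I decouples the tasks into independent scalar problems, while an A with nonzero off-diagonal entries (e.g. A = (1 − s) I + s 11ᵀ) couples them, which is one simple way such kernels can model task relations.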

Citation (APA)

Micchelli, C. A., & Pontil, M. (2005). Kernels for multi-task learning. In Advances in Neural Information Processing Systems. Neural Information Processing Systems Foundation.
