Hyper-parameter tuning for graph kernels via multiple kernel learning

Abstract

Kernelized learning algorithms have seen steady growth in popularity over the last few decades. Estimating the performance of these kernels in real applications is typically computationally demanding because of the hyper-parameter selection process. This is especially true for graph kernels, which are themselves quite expensive to compute. In this paper, we study an approach that replaces the commonly adopted kernel hyper-parameter selection procedure with a multiple kernel learning procedure that learns a linear combination of the kernel matrices obtained from the same kernel with different hyper-parameter values. Empirical results on real-world graph datasets show that the proposed methodology is faster than the baseline method when the number of parameter configurations is large, while maintaining comparable, and in some cases superior, predictive performance.
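To make the idea concrete, the sketch below is a minimal, self-contained illustration (not the authors' code): it builds several Gram matrices from the same graph kernel with different values of its hyper-parameter, then learns a linear combination of them instead of grid-searching over the parameter. The toy Weisfeiler-Lehman-style subtree kernel, the random-graph data, and the centred kernel-target-alignment weighting rule (a simple stand-in for the MKL solver evaluated in the paper) are all illustrative assumptions.

```python
# Sketch: replace hyper-parameter grid search for a graph kernel with a
# learned linear combination of Gram matrices (one per parameter value).
# All graphs, names and the weighting rule below are illustrative assumptions.
from collections import Counter

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)


def random_graph(n, p):
    """Erdos-Renyi-style labelled graph; node labels are capped degrees."""
    adj = {v: set() for v in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return {"adj": adj, "labels": {v: min(len(adj[v]), 3) for v in adj}}


def wl_features(graphs, h):
    """Bag-of-labels feature maps after h WL relabelling iterations."""
    labels = [dict(g["labels"]) for g in graphs]
    counts = [Counter(lab.values()) for lab in labels]
    compress = {}  # (iteration, signature) -> compressed label, shared by all graphs
    for it in range(h):
        new_labels = []
        for g, lab in zip(graphs, labels):
            nl = {}
            for v, neigh in g["adj"].items():
                sig = (it, lab[v], tuple(sorted(lab[u] for u in neigh)))
                nl[v] = compress.setdefault(sig, len(compress))
            new_labels.append(nl)
        labels = new_labels
        for c, lab in zip(counts, labels):
            c.update(("wl", x) for x in lab.values())
    return counts


def gram_matrix(graphs, h):
    """Cosine-normalised Gram matrix of the toy WL kernel with h iterations."""
    feats = wl_features(graphs, h)
    n = len(feats)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            shared = feats[i].keys() & feats[j].keys()
            K[i, j] = K[j, i] = sum(feats[i][k] * feats[j][k] for k in shared)
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)


def alignment_weights(kernels, y):
    """Non-negative combination weights from centred kernel-target alignment."""
    yy = np.outer(y, y).astype(float)
    n = len(y)
    H = np.eye(n) - np.ones((n, n)) / n  # centring matrix
    w = []
    for K in kernels:
        Kc = H @ K @ H
        w.append(max(0.0, (Kc * yy).sum() / np.linalg.norm(Kc)))
    w = np.asarray(w)
    return w / w.sum()


# Two classes of random graphs that differ in edge density.
graphs = [random_graph(12, 0.2) for _ in range(30)] + \
         [random_graph(12, 0.5) for _ in range(30)]
y = np.array([-1] * 30 + [+1] * 30)

hs = [0, 1, 2, 3]  # the hyper-parameter grid for the WL iterations
kernels = [gram_matrix(graphs, h) for h in hs]
w = alignment_weights(kernels, y)  # one weighting step instead of a grid search
K = sum(wi * Ki for wi, Ki in zip(w, kernels))

clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
print("weight per h:", dict(zip(hs, np.round(w, 3))))
print("training accuracy:", clf.score(K, y))
```

In the baseline procedure one would instead train and cross-validate one SVM per value of h and keep the best, so the cost grows with the size of the grid; here the combination weights are computed once over all Gram matrices, which is where the claimed speed-up for large parameter grids comes from. For an honest evaluation the weights and the SVM would of course be fit on a training split only.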

Citation (APA)

Massimo, C. M., Navarin, N., & Sperduti, A. (2016). Hyper-parameter tuning for graph kernels via multiple kernel learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9948 LNCS, pp. 214–223). Springer Verlag. https://doi.org/10.1007/978-3-319-46672-9_25
