Kernel combination versus classifier combination

Abstract

Combining classifiers joins the strengths of different classifiers to improve classification performance. Using rules to combine the outputs of different classifiers is the basic structure of classifier combination. Fusing models from different kernel machine classifiers is another combining strategy, called kernel combination. Although classifier combination and kernel combination are very different strategies for combining classifiers, they aim at the same goal through very similar fundamental concepts. We propose here a compositional method for kernel combination. The new composed kernel matrix is an extension and union of the original kernel matrices. Kernel combination approaches generally rely heavily on the training data and have to learn weights that indicate the importance of each kernel. Our compositional method avoids learning any weights; the importance of the kernel functions is derived directly in the process of learning the kernel machines. The performance of the proposed kernel combination procedure is illustrated by experiments comparing it with classifier combination based on the same kernels. © Springer-Verlag Berlin Heidelberg 2007.
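
The abstract does not spell out how the composed kernel matrix is constructed. One way to picture an "extension and union" of base kernel matrices is a block-structured matrix in which each original kernel occupies its own block and every training object appears once per kernel, so that a single kernel machine trained on the composed matrix implicitly assigns importance to each kernel through its dual weights. The sketch below illustrates only that idea; the block-diagonal layout, the duplicated labels, and the precomputed-kernel SVM are assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.svm import SVC

# Illustrative sketch only: the composed kernel here is a block-diagonal
# "union" of two base kernel matrices, with every training object duplicated
# (one copy per kernel). This is an assumed construction, not necessarily the
# one used by Lee, Verzakov & Duin (2007).

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))             # toy training data
y = rng.integers(0, 2, size=40)          # toy binary labels

K1 = rbf_kernel(X, X, gamma=0.5)         # first base kernel (n x n)
K2 = polynomial_kernel(X, X, degree=2)   # second base kernel (n x n)

n = K1.shape[0]
K_composed = np.zeros((2 * n, 2 * n))
K_composed[:n, :n] = K1                  # block for the first kernel
K_composed[n:, n:] = K2                  # block for the second kernel

y_extended = np.concatenate([y, y])      # each object appears once per block

# A single kernel machine trained on the composed matrix assigns dual weights
# to both copies of each object, so the relative importance of the kernels
# emerges from training rather than being learned as explicit mixing weights.
clf = SVC(kernel="precomputed").fit(K_composed, y_extended)
print(clf.dual_coef_.shape)
```

Note that classifying a new object under this sketch would require building the corresponding composed test-versus-training kernel rows in the same block layout before calling the classifier.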

Citation (APA)

Lee, W. J., Verzakov, S., & Duin, R. P. W. (2007). Kernel combination versus classifier combination. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4472 LNCS, pp. 22–31). Springer Verlag. https://doi.org/10.1007/978-3-540-72523-7_3
