SVM ensembles are better when different kernel types are combined

Abstract

Support vector machines (SVMs) are strong classifiers, but large datasets can lead to prohibitively long training times and high memory requirements. SVM ensembles, in which each individual SVM sees only a fraction of the data, are one approach to overcoming this barrier. Continuing related work in this field, we construct SVM ensembles with Bagging and Boosting. As a new idea, we analyze SVM ensembles that combine different kernel types (linear, polynomial, RBF) inside one ensemble. The goal is to train a single strong SVM ensemble classifier for large datasets with lower time and memory requirements than a single SVM trained on all data. Our experiments provide evidence for the following findings: combining different kernel types can yield an ensemble classifier stronger than each individual SVM trained on all data, and stronger than ensembles built from a single kernel type alone. Boosting is productive only if each individual SVM is made sufficiently weak; otherwise we observe overfitting. Even for very small training sample sizes, and thus greatly reduced time and memory requirements, the ensemble approach often delivers accuracies close to those of a single SVM trained on all data.
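The approach described above can be sketched in a few lines: a bagging-style ensemble in which each SVM is trained on a small bootstrap fraction of the data and kernel types alternate across members, with predictions combined by majority vote. This is an illustrative sketch using scikit-learn on synthetic data, not the authors' exact experimental setup; the ensemble size, sample fraction, and dataset are arbitrary choices.

```python
# Sketch (assumption, not the paper's implementation): a mixed-kernel SVM
# ensemble where each member sees only a small bootstrap sample of the data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

kernels = ["linear", "poly", "rbf"]   # mixed kernel types inside one ensemble
frac = 0.1                            # each SVM sees only 10% of the training data
ensemble = []
for i in range(9):
    # Bootstrap sample: draw a small fraction of the training set with replacement
    idx = rng.choice(len(X_tr), size=int(frac * len(X_tr)), replace=True)
    clf = SVC(kernel=kernels[i % len(kernels)]).fit(X_tr[idx], y_tr[idx])
    ensemble.append(clf)

# Combine members by majority vote over their binary predictions
votes = np.array([clf.predict(X_te) for clf in ensemble])
y_pred = (votes.mean(axis=0) >= 0.5).astype(int)
accuracy = float((y_pred == y_te).mean())
```

Because each member trains on only 10% of the data, the quadratic-to-cubic cost of SVM training applies to a much smaller sample per member, which is the time and memory saving the abstract refers to.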

Citation (APA)

Stork, J., Ramos, R., Koch, P., & Konen, W. (2015). SVM ensembles are better when different kernel types are combined. In Studies in Classification, Data Analysis, and Knowledge Organization (Vol. 48, pp. 191–201). Springer. https://doi.org/10.1007/978-3-662-44983-7_17
