Reducing the effect of out-voting problem in ensemble based incremental support vector machines

Abstract

Although Support Vector Machines (SVMs) have been successfully applied to a large number of classification and regression problems, they suffer from the catastrophic forgetting phenomenon. In our previous work, which integrated SVM classifiers into an ensemble framework using Learn++ (SVMLearn++) [1], we showed that SVM classifiers can in fact be equipped with incremental learning capability. However, Learn++ suffers from an inherent out-voting problem: when asked to learn new classes, it generates an unnecessarily large number of classifiers to learn them. In this paper, we propose a new ensemble-based incremental learning approach using SVMs that builds on the incremental Learn++.MT algorithm. Experiments on real-world and benchmark datasets show that the proposed approach reduces the number of SVM classifiers generated, thereby reducing the effect of the out-voting problem. It also provides performance improvements over the previous approach. © Springer-Verlag Berlin Heidelberg 2005.
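The out-voting problem described above can be illustrated with a minimal sketch of weighted majority voting, the combination rule used by Learn++-style ensembles. The classifier stubs and uniform weights below are illustrative assumptions, not the paper's actual method: classifiers trained before a new class appeared cannot vote for it, so a single new classifier is out-voted unless many more are generated.

```python
# Hypothetical sketch of weighted majority voting in a Learn++-style
# ensemble. The classifiers here are stand-in stubs (constant-output
# functions), not trained SVMs, and the weights are illustrative.

def weighted_majority_vote(classifiers, weights, x):
    """Combine ensemble votes: each classifier maps input x to a label;
    the label with the highest total weight wins."""
    votes = {}
    for clf, w in zip(classifiers, weights):
        label = clf(x)
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# Five classifiers trained before class "C" existed: they can only
# vote for old classes (here they all predict "A" on this input).
old_ensemble = [lambda x: "A"] * 5
# One new classifier that has learned the new class "C".
new_ensemble = [lambda x: "C"]

classifiers = old_ensemble + new_ensemble
weights = [1.0] * len(classifiers)

# The five old classifiers out-vote the single new one,
# so the ensemble still predicts "A" even for a class-"C" instance.
print(weighted_majority_vote(classifiers, weights, x=None))  # prints "A"
```

Learn++ compensates by generating many additional classifiers for the new class, which is exactly the overhead the proposed Learn++.MT-based approach aims to reduce.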

Citation (APA)

Erdem, Z., Polikar, R., Gurgen, F., & Yumusak, N. (2005). Reducing the effect of out-voting problem in ensemble based incremental support vector machines. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3697 LNCS, pp. 607–612). https://doi.org/10.1007/11550907_96
