Batch support vector training based on exact incremental training

Abstract

In this paper we discuss training support vector machines (SVMs) by repeatedly solving a set of linear equations, which is an extension of the exact incremental training proposed by Cauwenberghs and Poggio. First, we select two data points belonging to different classes and determine the optimal separating hyperplane. Then, we divide the training data set into several chunk data sets and put the two data points into the active set, which holds the current and previous support vectors. For the combined set of the active set and a chunk data set, we detect the datum that maximally violates the Karush-Kuhn-Tucker (KKT) conditions, and modify the optimal hyperplane by solving a set of equations that constrain the margins of the unbounded support vectors and the optimality of the bias term. We iterate this procedure until no violating data remain in any combined set. Through computer experiments using several benchmark data sets, we show that the training speed of the proposed method is comparable with that of the primal-dual interior-point method combined with the decomposition technique, usually with a smaller number of support vectors. © Springer-Verlag Berlin Heidelberg 2008.
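The abstract only sketches the training loop in prose, so the following is a minimal Python sketch of that loop under simplifying assumptions, not the authors' implementation: it assumes a linear kernel, a hard-margin-style formulation (no upper bound C, so every support vector is treated as unbounded), and it simply drops active-set points whose multipliers become non-positive after a solve. All function and variable names (solve_active_set, train, etc.) are hypothetical.

```python
import numpy as np


def kernel(X1, X2):
    # Linear kernel for the sketch; the method itself works with any kernel matrix.
    return X1 @ X2.T


def solve_active_set(X, y, active):
    """Solve the linear system that constrains the margins of the (assumed
    unbounded) support vectors in `active` and the optimality of the bias:
        sum_j alpha_j y_j K(x_i, x_j) + b = y_i   for every i in the active set
        sum_j alpha_j y_j = 0
    Returns the multipliers alpha (aligned with `active`) and the bias b."""
    Xs, ys = X[active], y[active]
    K = kernel(Xs, Xs)
    n = len(active)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = K * ys[None, :]          # y_j K(x_i, x_j)
    A[:n, n] = 1.0                        # bias column
    A[n, :n] = ys                         # equality-constraint row
    rhs = np.append(ys, 0.0)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # lstsq for numerical safety
    return sol[:n], sol[n]


def decision(X, y, active, alpha, b, Xq):
    # f(x) = sum_j alpha_j y_j K(x, x_j) + b over the active set.
    return kernel(Xq, X[active]) @ (alpha * y[active]) + b


def train(X, y, chunk_size=100, tol=1e-6, max_iter=1000):
    # Initialize with one datum from each class, as described in the abstract.
    active = [int(np.where(y == +1)[0][0]), int(np.where(y == -1)[0][0])]
    chunks = [range(s, min(s + chunk_size, len(y)))
              for s in range(0, len(y), chunk_size)]
    alpha, b = solve_active_set(X, y, active)
    for _ in range(max_iter):
        changed = False
        for chunk in chunks:
            cand = sorted(set(chunk) | set(active))
            # KKT condition for alpha_i = 0 points: y_i f(x_i) >= 1.
            margins = y[cand] * decision(X, y, active, alpha, b, X[cand])
            worst = cand[int(np.argmin(margins))]
            if margins.min() < 1.0 - tol and worst not in active:
                active.append(worst)
                alpha, b = solve_active_set(X, y, active)
                # Simplification: discard points whose multiplier became <= 0.
                active = [i for i, a in zip(active, alpha) if a > tol]
                alpha, b = solve_active_set(X, y, active)
                changed = True
        if not changed:          # no violator left in any combined set
            break
    return active, alpha, b
```

As a usage sketch, with X of shape (n_samples, n_features) and labels y in {+1, -1}, `active, alpha, b = train(X, y)` would return the support-vector indices, their multipliers, and the bias of the separating hyperplane.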

Citation (APA)

Abe, S. (2008). Batch support vector training based on exact incremental training. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5163 LNCS, pp. 295–304). https://doi.org/10.1007/978-3-540-87536-9_31
