Iterative Single Data Algorithm for Training Kernel Machines from Huge Data Sets: Theory and Performance

  • Kecman, V.
  • Huang, T.-M.
  • Vogt, M.
Citations: N/A
Readers: 66 (Mendeley users who have this article in their library)

Abstract

The chapter introduces the latest developments and results of the Iterative Single Data Algorithm (ISDA) for solving large-scale support vector machine (SVM) problems. First, the equality of the Kernel AdaTron (KA) method (originating from a gradient ascent learning approach) and the Sequential Minimal Optimization (SMO) learning algorithm (based on an analytic quadratic programming step for a model without bias term b) in designing SVMs with positive definite kernels is shown for both nonlinear classification and nonlinear regression tasks. The chapter also introduces the classic Gauss-Seidel procedure and its derivative, the successive over-relaxation (SOR) algorithm, as viable (and usually faster) training algorithms, and proves a convergence theorem for these related iterative methods. The second part of the chapter presents the effects and the methods of incorporating an explicit bias term b into the ISDA. The algorithms shown here iterate over a single training data point at a time (a.k.a. per-pattern learning), which makes the proposed ISDAs remarkably quick. The final solution in the dual domain is not an approximate one; it is the optimal set of dual variables that any of the existing, proven QP solvers would have produced, if only they could cope with such huge data sets.
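
The per-pattern iteration described above is compact enough to sketch. What follows is a minimal, illustrative Python implementation (not the authors' code) of the single-data update for the SVM classification dual without the bias term b: with omega = 1.0 the step coincides with the Kernel AdaTron / Gauss-Seidel / SMO-without-bias iteration, while 1 < omega < 2 gives the successive over-relaxation (SOR) variant. The RBF kernel choice, the function names, and all parameter values are assumptions made for this example.

    import numpy as np

    def rbf_kernel(X, gamma=1.0):
        # Gram matrix of a Gaussian RBF kernel (positive definite).
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
        return np.exp(-gamma * d2)

    def isda_train(K, y, C=1.0, omega=1.0, tol=1e-4, max_epochs=200):
        # ISDA sketch for the SVM classification dual without bias:
        #   max_a  sum(a) - 0.5 * a^T H a,   subject to 0 <= a_i <= C,
        # where H_ij = y_i * y_j * K_ij.
        n = len(y)
        a = np.zeros(n)
        H = (y[:, None] * y[None, :]) * K
        for _ in range(max_epochs):
            max_change = 0.0
            for i in range(n):                  # one training point at a time
                grad_i = 1.0 - H[i] @ a         # dual gradient at coordinate i
                a_i = a[i] + omega * grad_i / H[i, i]
                a_i = min(max(a_i, 0.0), C)     # clip to the box [0, C]
                max_change = max(max_change, abs(a_i - a[i]))
                a[i] = a_i                      # in-place (Gauss-Seidel) update
            if max_change < tol:                # epoch changed little: converged
                break
        return a

    # Toy usage; decision values are f(x) = sum_i a_i * y_i * K(x_i, x).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)
    K = rbf_kernel(X, gamma=0.5)
    alpha = isda_train(K, y, C=10.0, omega=1.5)  # omega > 1: SOR acceleration
    train_acc = np.mean(np.sign(K @ (alpha * y)) == y)

Because each step updates a single dual variable using the current values of all the others, the inner loop touches only one row of the kernel matrix at a time, which is what makes per-pattern iteration attractive for huge data sets.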

Cite

APA

Kecman, V., Huang, T.-M., & Vogt, M. (2005). Iterative Single Data Algorithm for Training Kernel Machines from Huge Data Sets: Theory and Performance (pp. 255–274). https://doi.org/10.1007/10984697_12
