Generalization error analysis for polynomial kernel methods - Algebraic geometrical approach

Abstract

The generalization properties of learning classifiers with a polynomial kernel function are examined here. We first show that the generalization error of the learning machine depends on the properties of the separating curve, that is, the intersection of the input surface and the true separating hyperplane in the feature space. When the input space is one-dimensional, the problem decomposes into as many one-dimensional problems as there are intersecting points. Otherwise, the generalization error is determined by the class of the separating curve. Next, we consider how the class of the separating curve depends on the true separating function. The class is maximal when the true separating polynomial function is irreducible, and smaller otherwise. In either case, the class depends only on the true function and not on the dimension of the feature space. These results imply that the generalization error does not increase as the dimension of the feature space grows, and that so-called overmodeling does not occur in kernel learning. © Springer-Verlag Berlin Heidelberg 2003.
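The one-dimensional case described in the abstract can be made concrete with a small sketch (an illustration of the general idea, not the paper's construction; the polynomial coefficients below are an invented example). For a 1-D input x and a degree-3 polynomial kernel, the feature map is phi(x) = (1, x, x^2, x^3), so a separating hyperplane w . phi(x) = 0 in feature space is the sign of a cubic polynomial in the input space. The "intersecting points" are the real roots of that polynomial, and learning the boundary reduces to locating each root separately:

```python
import numpy as np

# Hypothetical true separating function: (x-1)(x-2)(x-3),
# stored with the lowest-degree coefficient first.
w = np.array([-6.0, 11.0, -6.0, 1.0])

# The real roots of w . phi(x) = 0 are the intersection points of the
# true separating hyperplane with the (1-D) input surface.
roots = np.sort(np.roots(w[::-1]).real)  # np.roots wants highest degree first
print(roots)  # -> [1. 2. 3.]

# Between consecutive roots the label is constant, so the classification
# problem splits into one one-dimensional boundary-location problem per root.
labels = np.sign(np.polyval(w[::-1], np.array([0.5, 1.5, 2.5, 3.5])))
print(labels)  # alternating signs across each boundary point
```

Note that the number of such boundary points is bounded by the degree of the true polynomial, not by the dimension of the feature space, which is the intuition behind the claim that enlarging the feature space does not inflate the generalization error.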

APA

Ikeda, K. (2003). Generalization error analysis for polynomial kernel methods - Algebraic geometrical approach. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2714, 201–208. https://doi.org/10.1007/3-540-44989-2_25
