The generalization properties of learning classifiers with a polynomial kernel function are examined here. We first show that the generalization error of the learning machine depends on the properties of the separating curve, that is, the intersection of the input surface and the true separating hyperplane in the feature space. When the input space is one-dimensional, the problem decomposes into as many one-dimensional problems as there are intersection points. Otherwise, the generalization error is determined by the class of the separating curve. Next, we consider how the class of the separating curve depends on the true separating function. The class is maximal when the true separating polynomial function is irreducible and smaller otherwise. In either case, the class depends only on the true function, not on the dimension of the feature space. These results imply that the generalization error does not increase as the dimension of the feature space grows, and that so-called overmodeling does not occur in kernel learning. © Springer-Verlag Berlin Heidelberg 2003.
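As background for the feature-space view used above, the following minimal sketch (not from the paper; the degree-2 homogeneous kernel and the explicit feature map are illustrative assumptions) verifies that a polynomial kernel value equals an inner product in a higher-dimensional feature space, the space in which the separating hyperplane lives:

```python
import numpy as np

def poly_kernel(x, y, d=2):
    # homogeneous polynomial kernel of degree d: k(x, y) = (x . y)^d
    return float(np.dot(x, y)) ** d

def phi(x):
    # explicit feature map for d = 2 and 2-dimensional input:
    # phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2), so phi(x) . phi(y) = (x . y)^2
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
# kernel trick: inner product in the 3-dimensional feature space
# equals the kernel evaluated in the 2-dimensional input space
assert np.isclose(poly_kernel(x, y), phi(x) @ phi(y))
```

Raising the degree enlarges the feature space (the number of monomial coordinates), which is exactly the setting where one might expect overmodeling; the abstract's claim is that the generalization error nonetheless depends only on the true separating function.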
Ikeda, K. (2003). Generalization error analysis for polynomial kernel methods - Algebraic geometrical approach. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2714, 201–208. https://doi.org/10.1007/3-540-44989-2_25