Generalization error for multi-class margin classification

Abstract

In this article, we study rates of convergence of the generalization error of multi-class margin classifiers. In particular, we develop an upper bound theory quantifying the generalization error of various large margin classifiers. The theory permits a treatment of general margin losses, convex or nonconvex, in the presence or absence of a dominating class. Three main results are established. First, for any fixed margin loss, there may be a trade-off between the ideal and the actual generalization performance with respect to the choice of the class of candidate decision functions, governed by the trade-off between the approximation and estimation errors. Indeed, different margin losses lead to different ideal or actual performances in specific cases. Second, we demonstrate, in a problem of linear learning, that the convergence rate can be arbitrarily fast in the sample size n, depending on the joint distribution of the input/output pair. This goes beyond the anticipated rate O(n⁻¹). Third, we establish rates of convergence of several margin classifiers in feature selection, with the number of candidate variables p allowed to greatly exceed the sample size n, though no faster than exp(n).
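
For orientation, the first result rests on the standard decomposition of the excess generalization error. The following is a sketch in assumed notation, not the paper's exact statement: f̂ denotes the estimated decision function, f̄ the Bayes rule, and F the class of candidate decision functions.

```latex
% Sketch of the standard error decomposition (notation assumed, not the
% paper's exact statement). Enlarging \mathcal{F} shrinks the first term
% but typically inflates the second, which drives the stated trade-off.
e(\hat{f}, \bar{f})
  = \underbrace{\inf_{f \in \mathcal{F}} e(f, \bar{f})}_{\text{approximation error}}
  + \underbrace{\Bigl( e(\hat{f}, \bar{f}) - \inf_{f \in \mathcal{F}} e(f, \bar{f}) \Bigr)}_{\text{estimation error}}
```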
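The feature-selection setting of the third result can be illustrated numerically. The sketch below is not the paper's estimator: it fits an ℓ1-penalized linear classifier with the squared hinge, a convex margin loss, to a synthetic multi-class problem with p ≫ n, using scikit-learn; all names and parameter choices here are illustrative assumptions.

```python
# Minimal sketch (not the paper's estimator): an l1-penalized linear
# multi-class margin classifier in the p >> n regime, via scikit-learn.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, p, k = 100, 2000, 3  # sample size n, candidate variables p >> n, k classes

# Synthetic data: only the first 5 of the p variables carry class information.
X = rng.standard_normal((n, p))
beta = np.zeros((p, k))
beta[:5, :] = 3.0 * rng.standard_normal((5, k))
y = np.argmax(X @ beta + rng.standard_normal((n, k)), axis=1)

# The l1 penalty induces sparsity (feature selection); the squared hinge is
# a convex margin loss; one-vs-rest handles the multi-class structure.
clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=0.1,
                max_iter=5000)
clf.fit(X, y)

# Variables with a nonzero coefficient in any class are "selected".
selected = np.flatnonzero(np.any(clf.coef_ != 0, axis=0))
print("selected variables:", selected[:20])
print("training accuracy:", clf.score(X, y))
```

Because the ℓ1 penalty drives most coefficients to exactly zero, the set of selected variables can be read directly off the fitted coefficient matrix, which is what makes this regime a natural testbed for p ≫ n rates.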

Citation

Shen, X., & Wang, L. (2007). Generalization error for multi-class margin classification. Electronic Journal of Statistics, 1, 307–330. https://doi.org/10.1214/07-EJS069
