The consistency of greedy algorithms for classification


Abstract

We consider a class of algorithms for classification based on sequential greedy minimization of a convex upper bound on the 0-1 loss function. A large class of recently popular algorithms falls within the scope of this approach, including many variants of Boosting. The basic question addressed in this paper is the statistical consistency of such approaches. We provide precise conditions under which sequential greedy procedures are consistent, and establish rates of convergence under the assumption that the Bayes decision boundary belongs to a certain class of smooth functions. The results are established using a form of regularization that constrains the search space at each iteration of the algorithm. A particularly interesting conclusion of our work is that Boosting based on the logistic loss achieves faster rates of convergence than Boosting based on the exponential loss used in AdaBoost.
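To make the setup concrete, the following is a minimal sketch of sequential greedy minimization of a convex surrogate of the 0-1 loss, in the spirit of the algorithm class the abstract describes (it is not the paper's exact procedure). The choice of decision stumps as the weak-learner class, the fixed step size, and the logistic surrogate are all illustrative assumptions.

```python
# Greedy stagewise minimization of a convex surrogate of the 0-1 loss.
# Surrogate here: the logistic loss phi(z) = log(1 + exp(-z)); the
# exponential loss phi(z) = exp(-z) used in AdaBoost is an alternative.
import numpy as np

def logistic_loss(margins):
    # Convex upper bound (up to scaling) on the 0-1 loss 1[margin <= 0].
    return np.log1p(np.exp(-margins)).mean()

def stump_predictions(X):
    # Enumerate axis-aligned threshold stumps x -> sign(x_j - t).
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            yield (j, t), np.where(X[:, j] > t, 1.0, -1.0)

def greedy_boost(X, y, n_rounds=50, step=0.5):
    """At each round, add the scaled stump that most reduces the
    surrogate loss of the current ensemble output F."""
    F = np.zeros(len(y))          # current real-valued ensemble output
    ensemble = []
    for _ in range(n_rounds):
        params, h = min(stump_predictions(X),
                        key=lambda s: logistic_loss(y * (F + step * s[1])))
        # The fixed step size plays the role of the regularization that
        # constrains the search space at each iteration.
        F += step * h
        ensemble.append(params)
    return ensemble, F

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
    _, F = greedy_boost(X, y)
    print("training error:", np.mean(np.sign(F) != y))
```

The classifier is sign(F); consistency concerns whether its risk approaches the Bayes risk as the sample size and the number of rounds grow, under suitable constraints on the per-round search.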

Citation (APA)

Mannor, S., Meir, R., & Zhang, T. (2002). The consistency of greedy algorithms for classification. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2375, pp. 319–333). Springer Verlag. https://doi.org/10.1007/3-540-45435-7_22
