Geometric bounds for generalization in boosting

Abstract

We consider geometric conditions on a labeled data set which guarantee that boosting algorithms work well when linear classifiers are used as weak learners. We start by providing conditions on the error of the weak learner which guarantee that the empirical error of the composite classifier is small. We then focus on the conditions required to ensure that the linear weak learner itself achieves an error smaller than 1/2 − γ, where the advantage parameter γ is strictly positive and independent of the sample size. Such a condition guarantees that the generalization error of the boosted classifier decays to its minimal value at a rate of 1/√m, where m is the sample size. The required conditions, which are based solely on geometric concepts, can be easily verified for any data set in time O(m²), and may serve as an indication of the effectiveness of linear classifiers as weak learners for a particular data set.
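For context, the first claim parallels the classical AdaBoost training-error bound of Freund and Schapire (1997); the statement below is that standard result, not the paper's own geometric condition. If the weak learner at round t has weighted error ε_t = 1/2 − γ_t with every γ_t ≥ γ > 0, the composite classifier H_T satisfies

\[
\hat{\varepsilon}(H_T) \;\le\; \prod_{t=1}^{T} 2\sqrt{\varepsilon_t(1-\varepsilon_t)} \;=\; \prod_{t=1}^{T} \sqrt{1-4\gamma_t^{2}} \;\le\; \exp\Big(-2\sum_{t=1}^{T}\gamma_t^{2}\Big) \;\le\; e^{-2\gamma^{2}T},
\]

so a sample-size-independent advantage γ drives the empirical error to zero exponentially fast in the number of boosting rounds T.

As a concrete illustration of the setting, the following is a minimal AdaBoost sketch using axis-aligned stumps, the simplest linear classifiers, as weak learners. It is an illustrative sketch only: the stump search, the function names, and the data format are our assumptions, and the paper's O(m²) geometric verification procedure is not reproduced here.

import numpy as np

def best_stump(X, y, w):
    """Exhaustively search axis-aligned thresholds for the lowest weighted error."""
    m, d = X.shape
    best = (np.inf, 0, 0.0, 1)  # (error, feature, threshold, polarity)
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] <= thr, 1, -1)
                err = w @ (pred != y)
                if err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost(X, y, T=50):
    """AdaBoost with stump weak learners; y must take values in {-1, +1}."""
    m = len(y)
    w = np.full(m, 1.0 / m)              # uniform initial weights
    ensemble = []
    for _ in range(T):
        err, j, thr, pol = best_stump(X, y, w)
        if err >= 0.5:                   # no weak-learning advantage left
            break
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        pred = pol * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)   # upweight the examples this stump misses
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote of the stumps."""
    votes = sum(a * p * np.where(X[:, j] <= t, 1, -1)
                for a, j, t, p in ensemble)
    return np.sign(votes)

If every round's weighted error ε_t stays below 1/2 − γ on a given sample, the training error of predict decays as the bound above predicts; the paper's contribution is a geometric, O(m²)-checkable condition on the data under which such an advantage is guaranteed for linear weak learners.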

Citation

Mannor, S., & Meir, R. (2001). Geometric bounds for generalization in boosting. In Lecture Notes in Computer Science (Vol. 2111, pp. 461–472). Springer. https://doi.org/10.1007/3-540-44581-1_30
