A consistent strategy for boosting algorithms

Abstract

The probability of error of classification methods obtained by "boosting" algorithms as convex combinations of simple base classifiers is investigated. The main result of the paper is that certain regularized boosting algorithms provide Bayes-risk consistent classifiers under the sole assumption that the Bayes classifier may be approximated by a convex combination of the base classifiers. Non-asymptotic distribution-free bounds are also developed, which offer new insight into how boosting works and help explain the success of boosting algorithms in practical classification problems.
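As a minimal sketch of the setting in standard notation (the symbols below are assumptions for illustration, not taken from the paper): a boosting classifier is the sign of a weight-bounded combination of base classifiers h_1, ..., h_N, and Bayes-risk consistency means its error probability converges to the Bayes risk L*.

    f_n^{\lambda} = \sum_{j=1}^{N} w_j h_j, \qquad w_j \ge 0, \quad \sum_{j=1}^{N} w_j \le \lambda,

    g_n(x) = \mathrm{sign}\bigl(f_n^{\lambda}(x)\bigr), \qquad L(g_n) = \mathbb{P}\{\, g_n(X) \ne Y \,\} \xrightarrow[n \to \infty]{} L^{*}.

Here the bound \lambda plays the role of the regularization parameter, and the approximation assumption is that the Bayes classifier lies in the closure of such convex combinations.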

Citation (APA)

Lugosi, G., & Vayatis, N. (2002). A consistent strategy for boosting algorithms. Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science), 2375, 303–319. https://doi.org/10.1007/3-540-45435-7_21
