On boosting with optimal poly-bounded distributions


Abstract

In this paper, we construct a framework that makes it possible to polynomially bound the distributions produced by certain boosting algorithms, without significant performance loss. We then study the case of Freund and Schapire's AdaBoost algorithm, bounding its distributions to near-polynomial w.r.t. the example oracle's distribution. An advantage of AdaBoost over other boosting techniques is that it does not require an a priori lower bound on the accuracy of the hypotheses accepted from the weak learner during the learning process. We turn AdaBoost into an on-line boosting algorithm (boosting "by filtering"), which can be applied to a wider range of learning problems. In particular, AdaBoost now applies to the problem of DNF learning, answering in the affirmative a question posed by Jackson. We also construct a hybrid boosting algorithm, thereby achieving the lowest possible bound (in terms of Õ) on booster-produced distributions, and show a possible application to the problem of learning DNF w.r.t. the uniform distribution.
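To make the boosting-by-filtering idea concrete, the following Python sketch simulates a capped AdaBoost distribution by rejection sampling from the example oracle. It is a generic toy illustration under assumed interfaces, not the paper's construction: EX, weak_learner, and the parameters T, m, and cap are hypothetical placeholders.

    import math
    import random

    def adaboost_by_filtering(EX, weak_learner, T, m, cap):
        # Sketch of on-line boosting ("by filtering"): each round draws fresh
        # examples from the oracle EX() -> (x, y), with y in {-1, +1}, and
        # accepts one with probability proportional to its AdaBoost weight,
        # capped at `cap` so the simulated distribution stays bounded w.r.t.
        # the oracle's distribution.
        hypotheses, alphas = [], []

        def weight(x, y):
            # Unnormalized AdaBoost weight: exp(-margin) of the current
            # combined hypothesis on (x, y).
            margin = sum(a * h(x) * y for a, h in zip(alphas, hypotheses))
            return math.exp(-margin)

        def filtered_example():
            # Rejection sampling: the cap bounds the density ratio between
            # the induced distribution and the oracle's distribution.
            while True:
                x, y = EX()
                if random.random() < min(1.0, weight(x, y) / cap):
                    return x, y

        for _ in range(T):
            sample = [filtered_example() for _ in range(m)]
            h = weak_learner(sample)                  # weak hypothesis, h(x) in {-1, +1}
            eps = sum(1 for x, y in sample if h(x) != y) / m
            eps = min(max(eps, 1e-9), 1.0 - 1e-9)     # guard against 0/1 error
            alphas.append(0.5 * math.log((1.0 - eps) / eps))
            hypotheses.append(h)

        def final_hypothesis(x):
            # Weighted-majority vote of the accumulated weak hypotheses.
            return 1 if sum(a * h(x) for a, h in zip(alphas, hypotheses)) >= 0 else -1
        return final_hypothesis

Without the cap, the acceptance probability of filtering can become vanishingly small as boosting weights grow; capping trades exactness of the AdaBoost distribution for a bound on how far it deviates from the oracle's distribution, which is the trade-off the abstract describes.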

Citation (APA)

Bshouty, N. H., & Gavinsky, D. (2001). On boosting with optimal poly-bounded distributions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2111, pp. 490–506). Springer Verlag. https://doi.org/10.1007/3-540-44581-1_32
