Empirical comparison of boosting algorithms


Abstract

Boosting algorithms combine moderately accurate classifiers to produce a highly accurate one. The most prominent boosting algorithms are AdaBoost and Arc-x(j). Although they belong to the same family of algorithms, they differ in how they combine classifiers: AdaBoost uses a weighted majority vote, whereas Arc-x(j) combines classifiers through a simple majority vote. Breiman (1998) obtained the best results for Arc-x(j) with j = 4, but higher values were not tested. Here, two further values, j = 8 and j = 12, are tested and compared with j = 4 and with AdaBoost. An empirical comparison on several real binary databases shows that Arc-x4 outperforms all the other algorithms. © Springer-Verlag Berlin, Heidelberg 2005.
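The distinction the abstract draws between the two combination rules can be sketched as follows. This is an illustrative example, not the paper's code: classifier outputs are assumed to be labels in {-1, +1}, and the weights stand in for the accuracy-based coefficients AdaBoost assigns to its base classifiers.

```python
def simple_majority_vote(predictions):
    """Arc-x(j)-style combination: each classifier casts one equal vote."""
    total = sum(predictions)
    return 1 if total >= 0 else -1


def weighted_majority_vote(predictions, weights):
    """AdaBoost-style combination: each vote is scaled by its classifier's weight."""
    total = sum(w * p for w, p in zip(weights, predictions))
    return 1 if total >= 0 else -1


# The two rules can disagree on the same set of predictions: two weak
# classifiers vote +1, but one heavily weighted classifier votes -1.
predictions = [1, 1, -1]
weights = [0.1, 0.2, 2.0]  # hypothetical weights for illustration

print(simple_majority_vote(predictions))             # -> 1
print(weighted_majority_vote(predictions, weights))  # -> -1
```

With equal weights the two rules coincide; they diverge only when the weighted votes of a minority outweigh the unweighted majority, which is the behavioral difference between AdaBoost and Arc-x(j) that the paper compares.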

Citation (APA)

Khanchel, R., & Limam, M. (2005). Empirical comparison of boosting algorithms. In Studies in Classification, Data Analysis, and Knowledge Organization (pp. 161–167). Kluwer Academic Publishers. https://doi.org/10.1007/3-540-28084-7_16
