Boosting algorithms combine moderately accurate classifiers to produce a highly accurate one. The most prominent boosting algorithms are AdaBoost and Arc-x(j). Although they belong to the same family of algorithms, they differ in how the individual classifiers are combined: AdaBoost uses a weighted majority vote, whereas Arc-x(j) combines them through a simple majority vote. Breiman (1998) obtained the best results for Arc-x(j) with j = 4, but higher values were not tested. Two further values, j = 8 and j = 12, are tested here and compared with j = 4 and with AdaBoost. An empirical comparison on several real binary classification databases shows that Arc-x4 outperforms all the other algorithms. © Springer-Verlag Berlin Heidelberg 2005.
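The distinction drawn in the abstract can be sketched in code. Below is a minimal Python illustration of Arc-x(j) in the spirit of Breiman (1998): each example i carries a weight proportional to 1 + m_i^j, where m_i counts how often example i has been misclassified by the classifiers built so far, and the final ensemble predicts by a simple (unweighted) majority vote. This is a hedged sketch, not the paper's implementation: the base learner here is a weighted decision stump of my own construction, and fitting with weights directly (rather than resampling from the weight distribution, as Breiman did) is an assumed simplification.

```python
import numpy as np

def train_stump(X, y, w):
    """Fit a weighted decision stump: pick the (feature, threshold,
    polarity) triple with the smallest weighted error. (Helper for
    illustration; not from the paper.)"""
    best, best_err = None, np.inf
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best_err:
                    best_err, best = err, (f, thr, pol)
    return best

def stump_predict(stump, X):
    f, thr, pol = stump
    return np.where(pol * (X[:, f] - thr) >= 0, 1, -1)

def arc_xj(X, y, T=10, j=4):
    """Arc-x(j) sketch: weights proportional to 1 + m_i^j, where m_i
    is the number of times example i has been misclassified so far.
    Weighted fitting stands in for Breiman's resampling (assumption)."""
    n = X.shape[0]
    m = np.zeros(n)               # per-example misclassification counts
    classifiers = []
    for _ in range(T):
        w = 1.0 + m ** j
        w /= w.sum()              # normalise to a probability distribution
        stump = train_stump(X, y, w)
        m += (stump_predict(stump, X) != y)   # update counts
        classifiers.append(stump)
    return classifiers

def majority_vote(classifiers, X):
    """Simple (unweighted) majority vote -- the Arc-x(j) combination
    rule; AdaBoost would instead weight each vote by the classifier's
    training error."""
    votes = sum(stump_predict(c, X) for c in classifiers)
    return np.where(votes >= 0, 1, -1)
```

For example, on a one-dimensional toy set with labels in {-1, +1}, `majority_vote(arc_xj(X, y, T=5, j=4), X)` returns the ensemble's predictions; swapping `j=4` for `j=8` or `j=12` reproduces the variants the paper compares.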
CITATION STYLE
Khanchel, R., & Limam, M. (2005). Empirical comparison of boosting algorithms. In Studies in Classification, Data Analysis, and Knowledge Organization (pp. 161–167). Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-28084-7_16