In this paper, we examine ensemble algorithms (Boosting Lite and Ivoting) that provide accuracy approximating that of a single classifier but require significantly fewer training examples. Such algorithms allow ensemble methods to operate on very large data sets or to use very slow learning algorithms. Boosting Lite is compared with Ivoting, standard boosting, and building a single classifier. Comparisons are done on 11 data sets to which other approaches have been applied. We find that ensembles of support vector machines can attain higher accuracy with less data than ensembles of decision trees. We find that Ivoting may result in higher-accuracy ensembles on some data sets; however, Boosting Lite is generally able to indicate when boosting will increase overall accuracy. © Springer-Verlag Berlin Heidelberg 2007.
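To make the subsampling idea behind such ensembles concrete, the following is a minimal, hypothetical sketch of importance-sampled voting in the style of Ivoting: each base learner trains on a small "bite" of the data, and examples the current ensemble misclassifies are sampled with higher probability for the next bite. This is an illustrative simplification (the bite size, 20-member ensemble, fixed 4x misclassification weight, and scikit-learn decision trees are all assumptions, not the paper's exact algorithm or parameters).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, random_state=0)

def ensemble_predict(models, X):
    """Majority vote over the base classifiers (binary labels 0/1)."""
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

models = []
bite_size = 100  # each base learner sees far fewer examples than the full set
for t in range(20):
    if models:
        wrong = ensemble_predict(models, X) != y
        # Simplified importance sampling: misclassified examples
        # are 4x more likely to appear in the next bite (assumed weight).
        p = np.where(wrong, 4.0, 1.0)
    else:
        p = np.ones(len(y))
    p = p / p.sum()
    idx = rng.choice(len(y), size=bite_size, replace=True, p=p)
    models.append(DecisionTreeClassifier(random_state=t).fit(X[idx], y[idx]))

acc = (ensemble_predict(models, X) == y).mean()
print(f"ensemble training accuracy: {acc:.3f}")
```

Even though each tree sees only 100 of the 2000 examples, the weighted resampling concentrates later bites on hard examples, which is what lets such ensembles approach single-classifier accuracy with much less data per learner.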
CITATION STYLE
Hall, L. O., Banfield, R. E., Bowyer, K. W., & Kegelmeyer, W. P. (2007). Boosting lite - Handling larger datasets and slower base classifiers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4472 LNCS, pp. 161–170). Springer Verlag. https://doi.org/10.1007/978-3-540-72523-7_17