In this paper we present new bounds on the generalization error of a classifier f constructed as a convex combination of base classifiers from a class H. Algorithms that combine simple classifiers into a complex one, such as boosting and bagging, have attracted considerable attention. We obtain new, sharper bounds on the generalization error of combined classifiers that take into account both the empirical distribution of “classification margins” and the “approximate dimension” of the classifier, which is defined in terms of the weights assigned to the base classifiers by a voting algorithm. We study the performance of these bounds in several experiments with learning algorithms.
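To make the central quantity concrete: for a voting classifier f that is a convex combination of ±1-valued base classifiers, the “classification margin” on an example (x, y) is y·f(x), which lies in [−1, 1] and is positive exactly when the vote classifies the example correctly. The sketch below is illustrative only and is not the authors' construction; the function name and arrays are hypothetical.

```python
import numpy as np

def margin_distribution(base_preds, weights, y):
    """Empirical classification margins of a convex combination of voters.

    base_preds: (n_classifiers, n_samples) array of +/-1 base predictions
    weights:    nonnegative voting weights (normalized to sum to 1 below)
    y:          (n_samples,) array of +/-1 labels
    Returns the margins y * f(x), each in [-1, 1].
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()      # enforce a convex combination
    f = weights @ np.asarray(base_preds)   # weighted vote f(x) in [-1, 1]
    return y * f                           # positive iff the vote is correct

# Hypothetical example: three base classifiers on four points.
base_preds = np.array([[ 1,  1, -1, -1],
                       [ 1, -1, -1,  1],
                       [ 1,  1,  1, -1]])
y = np.array([1, 1, -1, -1])
margins = margin_distribution(base_preds, [0.5, 0.25, 0.25], y)
# All four margins are positive, so every point is classified correctly,
# but with varying confidence: [1.0, 0.5, 0.5, 0.5].
```

The empirical distribution of these margins is one of the two ingredients the bounds trade off; the other, the “approximate dimension,” is a function of the weight vector alone.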
CITATION STYLE
Koltchinskii, V., Panchenko, D., & Lozano, F. (2001). Further explanation of the effectiveness of voting methods: The game between margins and weights. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2111, pp. 241–255). Springer Verlag. https://doi.org/10.1007/3-540-44581-1_16