Dynamics of variance reduction in bagging and other techniques based on randomisation

Abstract

In this paper, the performance of bagging in classification problems is theoretically analysed using a framework developed in works by Tumer and Ghosh and extended by the authors. A bias-variance decomposition is derived, which relates the expected misclassification probability attained by linearly combining classifiers trained on N bootstrap replicates of a fixed training set to that attained by a single classifier trained on one bootstrap replicate of the same training set. Theoretical results show that the expected misclassification probability of bagging has the same bias component as a single bootstrap replicate, while the variance component is reduced by a factor of N. Experimental results show that the performance of bagging as a function of the number of bootstrap replicates follows our theoretical prediction quite well. Finally, it is shown that the theoretical results derived for bagging also apply to other methods for constructing multiple classifiers based on randomisation, such as the random subspace method and tree randomisation. © Springer-Verlag Berlin Heidelberg 2005.
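The variance-reduction claim in the abstract can be illustrated with a small simulation (this is an illustrative sketch, not the paper's derivation, and it uses a bootstrap mean as a stand-in for a classifier's output): averaging the estimates of N predictors, each fit on an independent bootstrap replicate of a fixed data set, leaves the bias unchanged while shrinking the variance component by roughly a factor of N.

```python
import random
import statistics

random.seed(0)

def bootstrap_estimate(data):
    """Mean of one bootstrap replicate: a stand-in for one classifier's output."""
    replicate = [random.choice(data) for _ in range(len(data))]
    return statistics.mean(replicate)

def bagged_estimate(data, n_replicates):
    """Average of N independent bootstrap estimates, as in bagging."""
    return statistics.mean(
        bootstrap_estimate(data) for _ in range(n_replicates)
    )

# A fixed "training set" drawn once from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(200)]

# Empirical variance (over repeated bootstrap draws) of a single estimate
# versus a bag of N = 10 estimates, conditional on the fixed data.
single = [bootstrap_estimate(data) for _ in range(500)]
bagged = [bagged_estimate(data, 10) for _ in range(500)]

var_single = statistics.variance(single)
var_bagged = statistics.variance(bagged)
ratio = var_single / var_bagged  # should be roughly N = 10
```

The ratio hovers around N because the bootstrap estimates are (conditionally on the fixed training set) independent and identically distributed, so averaging N of them divides the variance by N, which mirrors the factor-N reduction stated in the abstract.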

APA

Fumera, G., Roli, F., & Serrau, A. (2005). Dynamics of variance reduction in bagging and other techniques based on randomisation. In Lecture Notes in Computer Science (Vol. 3541, pp. 316–325). Springer Verlag. https://doi.org/10.1007/11494683_32
