Between two extremes: Examining decompositions of the ensemble objective function


Abstract

We study how the error of an ensemble regression estimator can be decomposed into two components: one accounting for the errors of the individual members and the other for the correlations within the ensemble. This is the well-known Ambiguity decomposition; we present an alternative decomposition of the error and show how both have been exploited in a learning scheme. By introducing a scaling parameter into the decomposition, we can blend the gradient (and therefore the learning process) smoothly between two extremes: from concentrating on individual accuracies while ignoring diversity, up to a full non-linear optimization of all parameters that treats the ensemble as a single learning unit. We demonstrate that this also applies to ensembles using a soft combination of posterior probability estimates, and so can be utilised for classifier ensembles. © Springer-Verlag Berlin Heidelberg 2005.
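The two ideas in the abstract can be checked numerically. The sketch below is illustrative, not the paper's implementation: it verifies the Ambiguity decomposition (ensemble squared error equals the average individual squared error minus the average spread around the ensemble mean) for a uniformly weighted ensemble, and then shows how a scaling parameter `lam` (an assumed name for the paper's blending parameter) interpolates the per-member gradient between purely individual training and ensemble-as-a-unit training.

```python
import numpy as np

# Illustrative check of the Ambiguity decomposition for a uniformly
# weighted regression ensemble; names (f, fbar, lam) are hypothetical.
rng = np.random.default_rng(0)
y = 2.0                          # target value
f = rng.normal(2.0, 0.5, 5)     # predictions of 5 ensemble members
fbar = f.mean()                  # ensemble (mean) prediction

ensemble_err = (fbar - y) ** 2
avg_indiv_err = np.mean((f - y) ** 2)   # average individual squared error
ambiguity = np.mean((f - fbar) ** 2)    # spread of members around the mean

# (fbar - y)^2 == mean_i (f_i - y)^2 - mean_i (f_i - fbar)^2
assert np.isclose(ensemble_err, avg_indiv_err - ambiguity)

# Blended per-member gradient:  grad_i(lam) = (f_i - y) - lam * (f_i - fbar)
# lam = 0: each member is trained on its own error, diversity ignored;
# lam = 1: the gradient collapses to (fbar - y) for every member, i.e.
# the ensemble is optimized as a single learning unit.
for lam, expected in [(0.0, f - y), (1.0, np.full(5, fbar - y))]:
    grad = (f - y) - lam * (f - fbar)
    assert np.allclose(grad, expected)
```

Intermediate values of `lam` give gradients between these two extremes, which is the smooth blending the abstract refers to.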

Citation (APA)

Brown, G., Wyatt, J., & Sun, P. (2005). Between two extremes: Examining decompositions of the ensemble objective function. In Lecture Notes in Computer Science (Vol. 3541, pp. 296–305). Springer Verlag. https://doi.org/10.1007/11494683_30
