Bagging classification models with reduced bootstrap


Abstract

Bagging is an ensemble method proposed to improve the predictive performance of learning algorithms, being especially effective when applied to unstable predictors. It is based on the aggregation of a certain number of prediction models, each one generated from a bootstrap sample of the available training set. We introduce an alternative method for bagging classification models, motivated by the reduced bootstrap methodology, where the generated bootstrap samples are forced to have a number of distinct original observations between two values k1 and k2. Five choices for k1 and k2 are considered, and the five resulting models are empirically studied and compared with bagging on three real data sets, employing classification trees and neural networks as the base learners. This comparison reveals for this reduced bagging technique a trend to diminish the mean and the variance of the error rate. © Springer-Verlag 2004.
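The abstract does not describe how the constrained bootstrap samples are generated. One straightforward way to realize the constraint, sketched below under that assumption, is rejection sampling: draw ordinary bootstrap samples and keep only those whose number of distinct original observations falls in [k1, k2]. The function name `reduced_bootstrap_sample` and the rejection-based approach are illustrative, not taken from the paper.

```python
import random

def reduced_bootstrap_sample(data, k1, k2, rng=None, max_tries=10000):
    """Draw a bootstrap sample of size len(data) whose number of distinct
    original observations lies in [k1, k2], via rejection sampling.

    This is a sketch of one possible implementation of the reduced
    bootstrap constraint; the paper may use a different construction.
    """
    rng = rng or random.Random()
    n = len(data)
    for _ in range(max_tries):
        # Ordinary bootstrap: sample n indices with replacement.
        idx = [rng.randrange(n) for _ in range(n)]
        # Accept only if the distinct-observation count is within [k1, k2].
        if k1 <= len(set(idx)) <= k2:
            return [data[i] for i in idx]
    raise RuntimeError("no sample satisfied the distinct-count constraint")
```

Each base learner (a classification tree or neural network, as in the paper) would then be trained on one such constrained sample, and the ensemble prediction is obtained by majority vote as in standard bagging. Note that for a plain bootstrap sample the expected number of distinct observations is about n(1 - (1 - 1/n)^n) ≈ 0.632n, so choices of k1 and k2 far from that value make rejection sampling slow.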

Citation (APA):

Pino-Mejías, R., Cubiles-de-la-Vega, M. D., López-Coello, M., Silva-Ramirez, E. L., & Jimenez-Gamero, M. D. (2004). Bagging classification models with reduced bootstrap. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3138, 966–973. https://doi.org/10.1007/978-3-540-27868-9_106
