Cost-complexity pruning of random forests


Abstract

Random forests perform bootstrap aggregation by sampling the training set with replacement. This enables the evaluation of the out-of-bag error, which serves as an internal cross-validation mechanism. Our motivation lies in using the unsampled training points to improve the ensemble of decision trees. In this paper we study the effect of using the out-of-bag samples to improve the generalization error, first of the individual decision trees and then of the random forest, by post-pruning. A preliminary empirical study on four UCI repository datasets shows a consistent decrease in the size of the forests without considerable loss in accuracy.
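To make the idea concrete, the following Python/scikit-learn sketch illustrates one plausible reading of the approach; it is an illustration under our own assumptions, not the authors' exact algorithm. Each tree is grown on a bootstrap sample, and its out-of-bag points are used to select a cost-complexity pruning level (scikit-learn's `ccp_alpha`, via `cost_complexity_pruning_path`) for that tree.

```python
# Hypothetical sketch: OOB-guided cost-complexity post-pruning of a bagged
# tree ensemble (illustrative only; the paper's procedure may differ).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)
n, n_trees = len(Xtr), 25
pruned_trees = []

for _ in range(n_trees):
    # Bootstrap sample: draw n indices with replacement; the rest are out-of-bag.
    boot = rng.integers(0, n, size=n)
    oob = np.setdiff1d(np.arange(n), boot)

    # Grow a full tree on the bootstrap sample.
    tree = DecisionTreeClassifier(random_state=0).fit(Xtr[boot], ytr[boot])

    # Candidate cost-complexity pruning levels (the alpha path of CART pruning).
    alphas = tree.cost_complexity_pruning_path(Xtr[boot], ytr[boot]).ccp_alphas

    # Keep the pruned tree that scores best on this tree's OOB samples.
    best = max(
        (DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(Xtr[boot], ytr[boot])
         for a in alphas),
        key=lambda t: t.score(Xtr[oob], ytr[oob]),
    )
    pruned_trees.append(best)

# Majority-vote prediction of the pruned ensemble on held-out data.
votes = np.stack([t.predict(Xte) for t in pruned_trees])
pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("test accuracy of OOB-pruned forest:", (pred == yte).mean())
```

Selecting the pruning level on OOB points rather than on the bootstrap sample itself is what makes the selection unbiased: the OOB points act as a per-tree validation set that costs no extra data.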

Citation (APA)

Kiran, B. R., & Serra, J. (2017). Cost-complexity pruning of random forests. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10225 LNCS, pp. 222–232). Springer Verlag. https://doi.org/10.1007/978-3-319-57240-6_18
