Using model trees and their ensembles for imbalanced data


Abstract

Model trees are decision trees with linear regression functions at the leaves. Although originally proposed for regression, they have also been applied successfully to classification problems. This paper studies their performance on imbalanced problems. These trees give better results than standard decision trees (J48, based on C4.5) and decision trees designed specifically for imbalanced data (CCPDT: Class Confidence Proportion Decision Trees). Moreover, different ensemble methods are considered using these trees as base classifiers: Bagging, Random Subspaces, AdaBoost, MultiBoost, LogitBoost, and methods specific to imbalanced data: Random Undersampling and SMOTE. Ensembles of model trees also give better results than ensembles of the other trees considered. © 2011 Springer-Verlag.
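For readers who want to try the general setup described in the abstract, the sketch below shows one way to pit a bagged model tree against a bagged J48 on an imbalanced dataset using the Weka API. This is an illustration only, not the authors' exact configuration: wrapping M5P in ClassificationViaRegression is one common way to use model trees for classification in Weka, and the dataset path, ensemble size, and choice of weighted AUC as the metric are all assumptions.

import java.util.Random;

import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.meta.Bagging;
import weka.classifiers.meta.ClassificationViaRegression;
import weka.classifiers.trees.J48;
import weka.classifiers.trees.M5P;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ModelTreeEnsembleDemo {
    public static void main(String[] args) throws Exception {
        // "imbalanced.arff" is a placeholder for any two-class imbalanced dataset.
        Instances data = new DataSource("imbalanced.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        // Model tree used as a classifier: M5P regression trees wrapped in
        // ClassificationViaRegression (one regression per class indicator).
        ClassificationViaRegression modelTree = new ClassificationViaRegression();
        modelTree.setClassifier(new M5P());

        // Bagging ensembles with the two base classifiers under comparison.
        Classifier[] ensembles = { bagged(modelTree), bagged(new J48()) };
        String[] names = { "Bagging + model trees", "Bagging + J48" };

        for (int i = 0; i < ensembles.length; i++) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(ensembles[i], data, 10, new Random(1));
            // AUC is a more informative measure than accuracy when classes are imbalanced.
            System.out.printf("%s: weighted AUC = %.3f%n",
                    names[i], eval.weightedAreaUnderROC());
        }
    }

    private static Bagging bagged(Classifier base) {
        Bagging bagging = new Bagging();
        bagging.setClassifier(base);
        bagging.setNumIterations(50); // ensemble size chosen arbitrarily for the sketch
        return bagging;
    }
}

The same skeleton extends to the other ensembles named in the abstract by swapping Bagging for Weka's AdaBoostM1, MultiBoostAB, or LogitBoost meta-classifiers.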

Citation (APA)

Rodríguez, J. J., Díez-Pastor, J. F., García-Osorio, C., & Santos, P. (2011). Using model trees and their ensembles for imbalanced data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7023 LNAI, pp. 94–103). https://doi.org/10.1007/978-3-642-25274-7_10
