Learning from unbalanced datasets is a challenging problem on which traditional learning algorithms may perform poorly. The objective functions used to learn classifiers typically tend to favor the larger, less important classes in such problems. This paper compares the performance of several popular decision tree splitting criteria (information gain, the Gini measure, and DKM) and identifies Hellinger distance as a new skew-insensitive measure. We outline the strengths of Hellinger distance under class imbalance, propose its application in forming decision trees, and perform a comprehensive comparative analysis of the decision tree construction methods. In addition, we consider the performance of each tree within a powerful sampling wrapper framework to capture the interaction between the splitting metric and sampling. We evaluate over a wide range of datasets and determine which methods operate best under class imbalance.
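As a rough illustration of the Hellinger-distance splitting criterion discussed in the paper, the minimal Python sketch below scores a binary split by the Hellinger distance between the within-class distributions it induces. The helper name `hellinger_split` and the toy labels are our own illustration, not the authors' implementation; it assumes two classes and a two-way split. Because the score compares within-class proportions rather than raw counts, it does not depend on the class priors, which is the intuition behind its skew insensitivity.

```python
import numpy as np

def hellinger_split(y_left, y_right, pos_label=1):
    """Hellinger distance between the class-conditional distributions
    induced by a binary split (higher means better separation).

    y_left, y_right: label arrays for the two child partitions.
    Assumes both classes appear somewhere in the parent node.
    """
    y_all = np.concatenate([y_left, y_right])
    n_pos = np.sum(y_all == pos_label)   # total positives in the parent
    n_neg = len(y_all) - n_pos           # total negatives in the parent
    total = 0.0
    for branch in (y_left, y_right):
        b_pos = np.sum(branch == pos_label)
        p = b_pos / n_pos                       # fraction of all positives in this branch
        q = (len(branch) - b_pos) / n_neg       # fraction of all negatives in this branch
        total += (np.sqrt(p) - np.sqrt(q)) ** 2
    return np.sqrt(total)

# Toy 1:100 imbalance: a split that isolates most minority examples
# still scores highly, since priors cancel out of the criterion.
y_left = np.array([1] * 9 + [0] * 100)
y_right = np.array([1] * 1 + [0] * 900)
print(hellinger_split(y_left, y_right))  # ~0.89 (max possible is sqrt(2))
```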
CITATION
Cieslak, D. A., & Chawla, N. V. (2008). Learning decision trees for unbalanced data. In Lecture Notes in Computer Science (Vol. 5211 LNAI, pp. 241–256). Springer. https://doi.org/10.1007/978-3-540-87479-9_34