Pre-pruning classification trees to reduce overfitting in noisy domains


Abstract

The automatic induction of classification rules from examples in the form of a classification tree is an important technique used in data mining. One of the problems encountered is the overfitting of rules to training data. In some cases this can lead to an excessively large number of rules, many of which have very little predictive value for unseen data. This paper describes a means of reducing overfitting known as J-pruning, based on the J-measure, an information theoretic means of quantifying the information content of a rule. It is demonstrated that using J-pruning generally leads to a substantial reduction in the number of rules generated and an increase in predictive accuracy. The advantage gained becomes more pronounced as the proportion of noise increases.
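The abstract does not define the J-measure itself. As a rough illustration only, the sketch below assumes the standard Smyth-Goodman form of the J-measure for a rule "IF Y = y THEN X = x", which J-pruning is reported to build on; the probabilities and threshold in the example are illustrative, not taken from the paper.

import math

def j_measure(p_y, p_x, p_x_given_y):
    """Smyth-Goodman J-measure for a rule 'IF Y = y THEN X = x' (in bits).

    p_y         -- probability that the rule's condition holds, P(y)
    p_x         -- prior probability of the rule's conclusion, P(x)
    p_x_given_y -- probability of the conclusion given the condition, P(x|y)

    Returns p(y) * j(X; Y = y), where j is the relative entropy between the
    posterior and prior distributions of the conclusion.
    """
    def term(post, prior):
        # Convention: 0 * log(0 / prior) is taken as 0.
        if post == 0.0:
            return 0.0
        return post * math.log2(post / prior)

    j = term(p_x_given_y, p_x) + term(1.0 - p_x_given_y, 1.0 - p_x)
    return p_y * j

# Example: a rule whose condition covers 30% of the examples and raises the
# probability of the predicted class from 0.5 (prior) to 0.9 (posterior).
print(j_measure(p_y=0.3, p_x=0.5, p_x_given_y=0.9))  # approx. 0.16 bits

In J-pruning, roughly speaking, a branch is not specialised further once doing so would reduce the J-value of the corresponding rule; the exact stopping criterion is given in the full paper.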

Citation (APA)

Bramer, M. (2002). Pre-pruning classification trees to reduce overfitting in noisy domains. In Lecture Notes in Computer Science (Vol. 2412, pp. 7–12). Springer-Verlag. https://doi.org/10.1007/3-540-45675-9_2
