Splitting data in decision trees using the new false-positives criterion

Abstract

Classification is a widely used technique in various fields, including data mining and statistical data analysis. Decision trees are among the most common knowledge representation schemes used in classification algorithms, and they can offer a more practical way of capturing knowledge than coding rules in more conventional languages. Decision trees are generally constructed by a top-down growth procedure that starts from the root node and greedily chooses the split of the data that maximizes some cost function. The order in which attributes are chosen according to the cost function determines how efficient the decision tree is. Gain, Gain ratio, Gini, and Twoing are among the best-known splitting criteria used to calculate the cost function. In this paper, we propose a new splitting criterion, the False-Positives criterion. The key idea behind the False-Positives criterion is to consider the instances having the most frequent class value, with respect to a certain attribute value, as true positives, and all instances having the remaining class values, with respect to that attribute value, as false positives. We present extensive empirical tests that demonstrate the efficiency of the proposed criterion.
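The abstract only sketches the criterion, but the idea lends itself to a short illustration. The Python sketch below assumes that the split score is simply the total number of false positives summed over the values of a candidate attribute (lower is better); the paper's exact cost function is not reproduced in the abstract, so this scoring rule, along with all function and variable names, is an assumption for illustration.

    from collections import Counter

    def false_positives_score(rows, attr, label):
        # Hypothetical helper: score a candidate split on `attr`.
        # Group the class labels of the instances by attribute value.
        groups = {}
        for row in rows:
            groups.setdefault(row[attr], []).append(row[label])

        # For each attribute value, instances carrying the most frequent
        # class count as true positives; all remaining instances at that
        # value are false positives.
        fp = 0
        for labels in groups.values():
            majority = Counter(labels).most_common(1)[0][1]
            fp += len(labels) - majority
        return fp

    # Toy data set: pick the attribute with the fewest false positives.
    data = [
        {"outlook": "sunny", "windy": "no",  "play": "no"},
        {"outlook": "sunny", "windy": "yes", "play": "no"},
        {"outlook": "rain",  "windy": "no",  "play": "yes"},
        {"outlook": "rain",  "windy": "yes", "play": "yes"},
    ]
    best = min(["outlook", "windy"],
               key=lambda a: false_positives_score(data, a, "play"))
    print(best)  # outlook: every value is pure, so zero false positives

In a top-down growth procedure of the kind the abstract describes, a scorer like this would be evaluated once per candidate attribute at each node, with the winning attribute used to partition the data before recursing on the resulting subsets.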

Citation

Boutsinas, B., & Tsekouronas, I. X. (2004). Splitting data in decision trees using the new false-positives criterion. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3025, pp. 174–182). Springer Verlag. https://doi.org/10.1007/978-3-540-24674-9_19
