Enhancing Weak Nodes in Decision Tree Algorithm Using Data Augmentation

Abstract

Decision trees are among the most popular classifiers in machine learning, artificial intelligence, and pattern recognition because they are accurate and easy to interpret. During tree construction, a node containing too few observations (a weak node) may still be split, yielding a split that is unreliable and statistically meaningless. Existing machine-learning methods can mitigate this issue, such as pruning, which removes non-meaningful parts of the tree. This paper treats weak nodes differently: we introduce a new algorithm, Enhancing Weak Nodes in Decision Tree (EWNDT), which reinforces them by augmenting their data with observations from similar tree nodes. We call this data augmentation a virtual merging because the borrowed data are used only temporarily, to recalculate the best splitting attribute and the best threshold at the weak node. We use two approaches to define the similarity between two nodes. The experimental results are verified on benchmark datasets from the UCI machine-learning repository and indicate that the EWNDT algorithm performs well.
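The idea described in the abstract can be sketched in a few functions: find the best Gini split at a node, measure similarity between two nodes (here, by overlap of their class distributions, one of several possible measures), and, when a node is weak, temporarily merge in a similar node's data before recomputing the split. This is a minimal illustrative sketch, not the authors' implementation; the function names, the similarity measure, and the `min_samples` threshold are assumptions.

```python
import numpy as np

def gini(y):
    # Gini impurity of a label array.
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    # Exhaustive search for the (feature, threshold) pair minimizing
    # the weighted Gini impurity of the two children.
    best = (None, None, np.inf)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            left, right = y[X[:, f] <= t], y[X[:, f] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best[2]:
                best = (f, t, score)
    return best

def class_distribution_similarity(y_a, y_b, classes):
    # Similarity of two nodes as the overlap of their class-probability
    # vectors (an illustrative choice; the paper describes two approaches).
    pa = np.array([(y_a == c).mean() for c in classes])
    pb = np.array([(y_b == c).mean() for c in classes])
    return 1.0 - 0.5 * np.abs(pa - pb).sum()

def split_weak_node(X, y, donor_X, donor_y, min_samples=30):
    # "Virtual merging": if the node is weak (too few observations),
    # borrow the similar donor node's data only for this computation;
    # the donor's samples are not permanently moved.
    if len(y) < min_samples:
        X = np.vstack([X, donor_X])
        y = np.concatenate([y, donor_y])
    return best_split(X, y)
```

In a full implementation, the donor would be chosen as the node maximizing the similarity measure, and the recomputed split would replace the unreliable one at the weak node.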

Citation (APA)

Manzali, Y., El Far, M., Chahhou, M., & Elmohajir, M. (2022). Enhancing Weak Nodes in Decision Tree Algorithm Using Data Augmentation. Cybernetics and Information Technologies, 22(2), 50–65. https://doi.org/10.2478/cait-2022-0016
