This chapter examines some alternative strategies for selecting attributes at each stage of the TDIDT decision tree generation algorithm and compares the sizes of the resulting trees for a number of datasets. The risk of obtaining decision trees that are entirely meaningless is highlighted, underlining the importance of a good choice of attribute selection strategy. One of the most widely used strategies is based on minimising entropy (or, equivalently, maximising information gain), and this approach is illustrated in detail.
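The entropy-based strategy the abstract refers to can be sketched briefly. A minimal illustration, not the chapter's own presentation: for a candidate attribute, compute the entropy of the class distribution before the split and the weighted average entropy of the subsets the split produces; the difference is the information gain, and TDIDT-style algorithms pick the attribute with the largest gain (equivalently, the smallest remaining entropy). The function and variable names below are illustrative, not taken from the chapter.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a collection of class labels, in bits."""
    total = len(labels)
    return -sum((count / total) * math.log2(count / total)
                for count in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Entropy reduction obtained by splitting on the attribute at attr_index."""
    base = entropy(labels)
    # Partition the class labels by the value of the chosen attribute.
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attr_index], []).append(label)
    # Weighted average entropy of the subsets after the split.
    remainder = sum(len(part) / len(labels) * entropy(part)
                    for part in partitions.values())
    return base - remainder

# Toy dataset: attribute 0 perfectly predicts the class,
# attribute 1 carries no information about it.
rows = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
labels = ["yes", "yes", "no", "no"]
print(information_gain(rows, labels, 0))  # 1.0 (entropy reduced to zero)
print(information_gain(rows, labels, 1))  # 0.0 (split is uninformative)
```

In this toy example the algorithm would split on attribute 0, since it maximises information gain; a strategy that ignored gain could just as easily split on attribute 1 and produce a larger, less meaningful tree.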
Bramer, M. (2020). Decision Tree Induction: Using Entropy for Attribute Selection (pp. 49–62). https://doi.org/10.1007/978-1-4471-7493-6_5