Abstract
Selecting the right set of features is one of the most important problems in designing a good classifier. Decision tree induction algorithms such as C4.5 incorporate an automatic feature selection strategy in their learning phase, whereas some other statistical classification algorithms require the feature subset to be selected in a preprocessing phase. It is well known that correlated and irrelevant features may degrade the performance of the C4.5 algorithm. In our study, we evaluated the influence of feature preselection on the prediction accuracy of C4.5 using a real-world data set. We observed that the accuracy of the C4.5 classifier can be improved with an appropriate feature preselection phase for the learning algorithm. Beyond that, the number of features used for classification can be reduced, which is important for image interpretation tasks since feature calculation is a time-consuming process.
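A minimal sketch of the general idea follows. It is not the paper's exact method: it uses a filter-style preselection (mutual-information ranking via scikit-learn's SelectKBest) in front of a DecisionTreeClassifier standing in for C4.5, and the data set, ranking criterion, and number of retained features k are illustrative assumptions only.

```python
# Sketch only: feature preselection before decision tree induction.
# DecisionTreeClassifier is a stand-in for C4.5; data set, criterion,
# and k=10 are assumptions for illustration, not taken from the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Baseline: tree induced on all available features.
baseline = DecisionTreeClassifier(criterion="entropy", random_state=0)
baseline_acc = cross_val_score(baseline, X, y, cv=10).mean()

# With preselection: keep only the k highest-ranked features,
# then induce the tree on the reduced feature subset.
preselected = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),
    DecisionTreeClassifier(criterion="entropy", random_state=0),
)
preselected_acc = cross_val_score(preselected, X, y, cv=10).mean()

print(f"all features:      {baseline_acc:.3f}")
print(f"with preselection: {preselected_acc:.3f}")
```

Besides any accuracy change, the pipeline only computes the retained features at prediction time, which reflects the paper's point that fewer features matter when feature calculation is expensive, as in image interpretation.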
Citation
Perner, P. (2001). Improving the accuracy of decision tree induction by feature preselection. Applied Artificial Intelligence, 15(8), 747–760. https://doi.org/10.1080/088395101317018582