Selecting the right set of features is one of the most important problems in designing a good classifier. Decision tree induction algorithms such as C4.5 incorporate an automatic feature selection strategy into their learning phase, whereas other statistical classification algorithms require the feature subset to be selected in a preprocessing phase. It is well known that correlated and irrelevant features may degrade the performance of the C4.5 algorithm. In our study, we evaluated the influence of feature pre-selection on the prediction accuracy of C4.5 using a real-world data set. We observed that the accuracy of the C4.5 classifier can be improved with an appropriate feature pre-selection phase preceding the learning algorithm.
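As a rough illustration of the filter-style pre-selection the abstract refers to, the sketch below ranks features by mutual information with the class label and keeps only the top-ranked subset before inducing a decision tree. This is not the authors' setup: scikit-learn's DecisionTreeClassifier (a CART-style learner) stands in for C4.5, the breast-cancer benchmark stands in for the paper's real-world data set, and the choice of selection criterion and subset size is illustrative only.

```python
# Minimal sketch (assumed setup, not the paper's experiment): feature
# pre-selection as a preprocessing step before decision tree induction.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Baseline: tree induced on all features, relying on the tree's own
# built-in feature selection during splitting.
baseline = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("all features:", accuracy_score(y_test, baseline.predict(X_test)))

# Pre-selection: keep the 10 features with the highest mutual information
# with the class label, then induce the tree on that subset only.
selected = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),
    DecisionTreeClassifier(random_state=0),
).fit(X_train, y_train)
print("pre-selected:", accuracy_score(y_test, selected.predict(X_test)))
```

Whether the pre-selected tree actually outperforms the baseline depends on the data set and the chosen subset size, which is exactly the kind of effect the paper evaluates empirically.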
Citation: Perner, P., & Apte, C. (2000). Empirical evaluation of feature subset selection based on a real-world data set. In Lecture Notes in Computer Science (Vol. 1910, pp. 575–580). Springer. https://doi.org/10.1007/3-540-45372-5_68