Recent years have seen wide-ranging efforts in attribute selection research. Attribute selection can efficiently reduce the hypothesis space by removing irrelevant and redundant attributes, and attribute reduction of an information system is a key problem in rough set theory and its applications. In this paper, we compare the performance of attribute selection using two tools, WEKA 3.7 and ROSE2. Filter methods use an alternative measure, rather than the error rate, to score a feature subset; this measure is chosen to be fast to compute while still capturing the usefulness of the feature set. Many filters provide a feature ranking rather than an explicit best feature subset, and the cutoff point in the ranking is chosen via cross-validation. We used search methods such as best first and greedy stepwise to evaluate subsets of features as a group for suitability. We apply these methods to the internet usage data set and tabulate comparison results for the various ways of searching the solution space to eliminate irrelevant attributes. The results highlight practical issues in attribute selection tools and point to better ways of identifying irrelevant attributes; comparing the attribute reduction tools reveals considerable differences between them.
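As a minimal sketch of the filter-and-rank approach described above, the snippet below ranks attributes with a fast filter score (mutual information) and picks the cutoff point in the ranking by cross-validation. It uses scikit-learn and a synthetic data set purely for illustration; the paper itself performs the selection with WEKA 3.7 and ROSE2 on the internet usage data set, so all names and parameters here are assumptions, not the authors' implementation.

```python
# Hypothetical illustration: filter-based attribute ranking with a
# cross-validated cutoff (the paper uses WEKA 3.7 / ROSE2, not this code).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline

# Synthetic stand-in for the internet usage data set used in the paper.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, n_redundant=5, random_state=0)

best_k, best_score = None, -1.0
for k in range(1, X.shape[1] + 1):
    pipe = Pipeline([
        ("filter", SelectKBest(mutual_info_classif, k=k)),  # filter ranking
        ("clf", GaussianNB()),
    ])
    score = cross_val_score(pipe, X, y, cv=5).mean()  # CV chooses the cutoff
    if score > best_score:
        best_k, best_score = k, score

print(f"Chosen cutoff: top {best_k} attributes (CV accuracy {best_score:.3f})")
```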
Sudha, M. (2014). Performance Comparison based on Attribute Selection Tools for Data Mining. Indian Journal of Science and Technology, 7(S7), 61–65. https://doi.org/10.17485/ijst/2014/v7sp7.5