A data driven stopping criterion for evolutionary instance selection

Abstract

Instance-based classifiers, such as k-Nearest Neighbors, predict the class of a new observation from a distance or similarity measure between the new instance and the stored training data. Because of the required distance calculations, classifying new instances becomes computationally expensive as the number of training observations grows. Instance selection techniques have therefore been proposed to improve instance-based classifiers by reducing the number of training instances that must be stored while preserving adequate classification rates. Among the available methods, an evolutionary algorithm has produced some of the best results for instance selection in terms of data reduction and preservation of classification accuracy. Unfortunately, this performance comes at the cost of longer computation times than classic instance selection techniques. In this work we introduce a new stopping criterion for the evolutionary algorithm that depends on the convergence of its fitness function. Experiments show that the new criterion reduces computation time while achieving comparable performance.
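The abstract does not spell out the convergence test, so the following is only an illustrative sketch in Python, not the paper's algorithm: a simple genetic search over boolean instance masks, scored by a weighted sum of leave-one-out 1-NN accuracy and reduction rate, that stops once the best fitness has failed to improve by more than a tolerance for a fixed number of consecutive generations. The names and parameters (`fitness`, `evolve`, `alpha`, `tol`, `patience`, `pop_size`, `p_mut`) are assumptions introduced for the example.

```python
import random

import numpy as np


def fitness(mask, X, y, alpha=0.5):
    """Weighted sum of leave-one-out 1-NN accuracy and reduction rate
    for the training subset selected by the boolean `mask` (illustrative fitness)."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return 0.0
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(X[idx] - X[i], axis=1)
        pos = np.where(idx == i)[0]
        if pos.size:                      # a selected instance may not vote for itself
            d[pos[0]] = np.inf
        j = int(np.argmin(d))
        if np.isfinite(d[j]) and y[idx[j]] == y[i]:
            correct += 1
    accuracy = correct / len(X)
    reduction = 1.0 - idx.size / len(X)
    return alpha * accuracy + (1.0 - alpha) * reduction


def evolve(X, y, pop_size=30, p_mut=0.01, tol=1e-4, patience=20, max_gens=500, seed=0):
    """Genetic search over instance subsets with a convergence-based stop:
    halt when the best fitness has not improved by more than `tol`
    for `patience` consecutive generations (one plausible reading of the criterion)."""
    rng = random.Random(seed)
    n = len(X)
    pop = [np.array([rng.random() < 0.5 for _ in range(n)]) for _ in range(pop_size)]
    best_mask, best_fit, stalled = None, -np.inf, 0
    for _ in range(max_gens):
        scored = sorted(pop, key=lambda m: fitness(m, X, y), reverse=True)
        top_fit = fitness(scored[0], X, y)
        if top_fit - best_fit > tol:      # meaningful improvement: reset the counter
            best_mask, best_fit, stalled = scored[0].copy(), top_fit, 0
        else:                             # fitness has (nearly) converged
            stalled += 1
            if stalled >= patience:
                break
        # Elitist reproduction: keep the top half, refill with mutated one-point crossovers.
        parents = scored[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(range(len(parents)), 2)
            cut = rng.randrange(1, n)
            child = np.concatenate([parents[a][:cut], parents[b][cut:]])
            flips = np.array([rng.random() < p_mut for _ in range(n)])
            children.append(np.logical_xor(child, flips))
        pop = parents + children
    return best_mask, best_fit
```

Called as `evolve(X, y)` with `X` an (n, d) feature array and `y` a length-n label array, the sketch returns the selected mask and its fitness. The patience/tolerance rule stops the search once fitness has converged rather than running for a fixed generation budget, which is where the computation-time savings described in the abstract would come from.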

Cite (APA)

Bennette, W. D. (2017). A data driven stopping criterion for evolutionary instance selection. In Advances in Intelligent Systems and Computing (Vol. 513, pp. 407–420). Springer Verlag. https://doi.org/10.1007/978-3-319-46562-3_26
