K- Support Vector Nearest Neighbor: Classification Method, Data Reduction, and Performance Comparison

  • Prasetyo E

Abstract

The use of data mining to harness data sets has become important over the past two decades, because the information it extracts is valuable. A major obstacle for data mining tasks, however, is the very large amount of data involved. Large data sets are precisely what make data mining distinctive for extracting information, but excessive data volume also degrades performance. In classification, data points that do not lie near the decision boundary are less useful and make the classification method inefficient. K-Support Vector Nearest Neighbor (K-SVNN) is proposed to address this problem, which typically arises with very large data sets. K-SVNN is able to reduce a very large amount of data while retaining good accuracy and without degrading performance. A performance comparison with several classification methods also shows that K-SVNN provides good accuracy: among the five methods compared, K-SVNN ranked in the top three. The accuracy difference between K-SVNN and the other methods is less than 0.66% on the Iris data set and 20.29% on the Wine data set.
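The abstract describes pruning the training set down to points near the decision boundary before nearest-neighbor classification. The exact K-SVNN algorithm is not detailed here, so the sketch below only illustrates the general idea with a simpler, hypothetical edited-nearest-neighbor rule: keep a training point only if its k nearest neighbors include a point of a different class (i.e. it sits in a mixed-label neighborhood near the boundary), then classify with plain k-NN on the reduced set.

```python
import numpy as np

def reduce_training_set(X, y, k=2):
    """Keep only points whose k nearest neighbors contain a point of a
    different class, i.e. points near the decision boundary.
    Illustrative sketch only -- not the published K-SVNN algorithm."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                     # exclude the point itself
        nn = np.argsort(d)[:k]            # indices of k nearest neighbors
        if np.any(y[nn] != y[i]):         # mixed-label neighborhood -> keep
            keep.append(i)
    return np.array(keep, dtype=int)

def knn_predict(X_train, y_train, x, k=1):
    """Plain k-NN majority vote on the (possibly reduced) training set."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    labels, counts = np.unique(y_train[nn], return_counts=True)
    return labels[np.argmax(counts)]

# Two clusters: interior points are dropped, only the two points
# facing the opposite class survive the reduction (indices 2 and 3).
X = np.array([[0.0, 0.0], [0.1, 0.1], [0.4, 0.5],
              [0.6, 0.5], [1.0, 1.0], [0.9, 0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
kept = reduce_training_set(X, y, k=2)          # -> [2, 3]
pred = knn_predict(X[kept], y[kept], np.array([0.95, 0.95]), k=1)
```

The reduced set contains only the boundary-adjacent points, yet the query near the class-1 cluster is still classified correctly, which is the efficiency/accuracy trade-off the abstract highlights.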

Citation (APA)

Prasetyo, E. (2016). K- Support Vector Nearest Neighbor: Classification Method, Data Reduction, and Performance Comparison. JEECS (Journal of Electrical Engineering and Computer Sciences), 1(1), 1–6. https://doi.org/10.54732/jeecs.v1i1.180
