Varying k-Nearest Neighbours: An Attempt to Improve a Widely Used Classification Model

Abstract

Classification is a central task in machine learning, and the k-nearest neighbour algorithm is one of its most widely used methods. The algorithm assigns a point to the class held by the majority of its closest neighbours, where the parameter ‘k’ determines which points qualify: a point influences another point’s classification only if it is among that point’s ‘k’ nearest neighbours. A major disadvantage of this algorithm is that exactly ‘k’ neighbours are always counted, regardless of how far each neighbour lies from the point in question. This is inconsistent: the importance of a neighbour is decided by its proximity, yet the actual distance between the two points is ignored. Many solutions have been proposed to mitigate this problem. This paper eliminates it altogether and adds another dimension to the nearest-neighbour classification model.
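The standard algorithm the abstract describes, along with distance weighting (one of the previously proposed remedies it alludes to, not the paper's own varying-k method, which the abstract does not detail), can be sketched as follows. The function name and data layout are illustrative choices, not from the paper:

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3, weighted=False):
    """Classify `query` by a vote among its k nearest training points.

    With weighted=False this is the plain algorithm criticised in the
    abstract: exactly k neighbours vote, however far away they are.
    With weighted=True each vote is scaled by 1/distance, a common
    remedy that reduces (but does not remove) the influence of
    distant neighbours.
    """
    # Pair every training point with its distance to the query,
    # then keep only the k closest.
    nearest = sorted(
        (math.dist(point, query), label)
        for point, label in zip(train, labels)
    )[:k]

    votes = Counter()
    for dist, label in nearest:
        # A neighbour at distance 0 coincides with the query;
        # give it an ordinary full vote to avoid division by zero.
        votes[label] += (1.0 / dist) if (weighted and dist > 0) else 1.0
    return votes.most_common(1)[0][0]
```

For example, with two well-separated clusters labelled 'a' and 'b', a query near the 'a' cluster is classified 'a' under either voting scheme; the schemes only diverge when far-away neighbours would otherwise outvote close ones.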

CITATION STYLE

APA

Bandyopadhyay, R. (2020). Varying k-Nearest Neighbours: An Attempt to Improve a Widely Used Classification Model. In Smart Innovation, Systems and Technologies (Vol. 159, pp. 1–8). Springer. https://doi.org/10.1007/978-981-13-9282-5_1
