Computing Optimal Attribute Weight Settings for Nearest Neighbor Algorithms

Abstract

Nearest neighbor (NN) learning algorithms, examples of the lazy learning paradigm, rely on a distance function to measure the similarity of a test example to the stored training examples. Since some attributes are more discriminative than others, and some may be only weakly relevant or entirely irrelevant, attributes should be weighted differently in the distance function. Most previous studies on weight setting for NN learning algorithms are empirical. In this paper we describe our attempt to derive theoretically optimal weights that minimize the predictive error of NN algorithms. Assuming a uniform distribution of examples in a two-dimensional continuous space, we first derive the average predictive error introduced by a linear classification boundary, and then determine the optimal weight setting for any polygonal classification region. Our theoretical results on optimal attribute weights can serve as a baseline, or lower bound, for comparison with empirical weight-setting methods.
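The paper's contribution is analytical, but the weighted distance function it builds on is easy to make concrete. As a minimal sketch (not code from the paper; the function names, the weighted Euclidean metric, and the example data are assumptions for illustration), a 1-NN classifier with per-attribute weights in a 2-d space might look like:

    import numpy as np

    def weighted_distance(x, y, w):
        # Weighted Euclidean distance: d_w(x, y) = sqrt(sum_i w_i * (x_i - y_i)^2).
        # A larger w_i makes attribute i more influential; w_i = 0 ignores it.
        return np.sqrt(np.sum(w * (x - y) ** 2))

    def predict_1nn(x, X_train, y_train, w):
        # Classify x with the label of its nearest training example under weights w.
        dists = [weighted_distance(x, t, w) for t in X_train]
        return y_train[int(np.argmin(dists))]

    # Hypothetical example: a 2-d space where only the first attribute determines
    # the class (linear boundary at x1 = 0.5). Setting the weight of the
    # irrelevant second attribute to 0 removes its noise from the distance.
    rng = np.random.default_rng(0)
    X_train = rng.random((100, 2))
    y_train = (X_train[:, 0] > 0.5).astype(int)
    w = np.array([1.0, 0.0])  # assumed weights for illustration
    print(predict_1nn(np.array([0.8, 0.1]), X_train, y_train, w))  # expected: 1

With the second weight set to 0, the classifier's decision depends only on the discriminative attribute, which is the intuition behind down-weighting irrelevant attributes that the paper makes precise.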

Citation (APA)

Ling, C. X., & Wang, H. (1997). Computing Optimal Attribute Weight Settings for Nearest Neighbor Algorithms. Artificial Intelligence Review, 11(1–5), 255–272. https://doi.org/10.1007/978-94-017-2053-3_10
