A variable metric probabilistic k-nearest-neighbours classifier

Abstract

The k-nearest neighbour (k-nn) model is a simple, popular classifier. Probabilistic k-nn is a more powerful variant in which the model is cast in a Bayesian framework, using (reversible jump) Markov chain Monte Carlo methods to average out the uncertainty over the model parameters. The k-nn classifier depends crucially on the metric used to determine distances between data points. However, the relative scalings of the features, and indeed whether some subset of the features is redundant, are seldom known a priori. Here we introduce a variable metric extension to the probabilistic k-nn classifier, which permits averaging over all rotations and scalings of the data. In addition, the method permits automatic rejection of irrelevant features. Examples are provided on synthetic data, illustrating how the method can deform feature space and select salient features, and also on real-world data. © Springer-Verlag Berlin Heidelberg 2004.
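As a rough illustration of the idea, the sketch below implements a simplified variant in Python. It is not the authors' reversible-jump MCMC implementation: instead of sampling over rotations, scalings, and feature subsets, it runs a plain Metropolis-Hastings sampler over the neighbourhood size k and a diagonal metric (one positive scaling per feature), then averages the k-nn predictive class probabilities over the posterior samples. Feature rejection appears implicitly when a feature's scaling shrinks toward zero. All names, priors, and proposal widths here (knn_class_probs, the 0.3 random-walk step, flat priors) are illustrative assumptions.

# Minimal sketch of a variable-metric probabilistic k-nn classifier.
# Simplifications vs. the paper: diagonal metric only (no rotations),
# fixed-dimension Metropolis-Hastings instead of reversible-jump MCMC.

import numpy as np

def knn_class_probs(X_train, y_train, X_query, k, scales, n_classes, eps=1e-6):
    """Soft k-nn class probabilities under per-feature scalings."""
    Xs = X_train * scales                        # deform feature space
    Qs = X_query * scales
    d2 = ((Qs[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]          # k nearest neighbours
    probs = np.zeros((len(X_query), n_classes))
    for c in range(n_classes):
        probs[:, c] = (y_train[idx] == c).mean(axis=1)
    # smooth so no class ever has exactly zero probability
    return (probs + eps) / (probs + eps).sum(axis=1, keepdims=True)

def log_likelihood(X, y, k, scales, n_classes):
    """Leave-one-out pseudo-likelihood of the training labels."""
    Xs = X * scales
    d2 = ((Xs[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # exclude each point itself
    idx = np.argsort(d2, axis=1)[:, :k]
    p_own = (y[idx] == y[:, None]).mean(axis=1)
    return np.log(p_own + 1e-6).sum()

def sample_posterior(X, y, n_classes, n_iter=2000, k_max=25, rng=None):
    """Metropolis-Hastings over (k, log-scales); returns thinned samples."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    k, log_s = 5, np.zeros(d)
    ll = log_likelihood(X, y, k, np.exp(log_s), n_classes)
    samples = []
    for it in range(n_iter):
        # propose a new k by a small random walk (boundary clipping
        # slightly biases the proposal; acceptable for a sketch) and
        # jitter one log-scaling at a time
        k_new = int(np.clip(k + rng.integers(-2, 3), 1, k_max))
        log_s_new = log_s.copy()
        log_s_new[rng.integers(d)] += rng.normal(0.0, 0.3)
        ll_new = log_likelihood(X, y, k_new, np.exp(log_s_new), n_classes)
        # flat priors -> acceptance ratio reduces to the likelihood ratio
        if np.log(rng.random()) < ll_new - ll:
            k, log_s, ll = k_new, log_s_new, ll_new
        if it > n_iter // 2 and it % 10 == 0:    # thin after burn-in
            samples.append((k, np.exp(log_s)))
    return samples

def predict(X_train, y_train, X_query, samples, n_classes):
    """Average predictive probabilities over posterior samples."""
    p = np.zeros((len(X_query), n_classes))
    for k, scales in samples:
        p += knn_class_probs(X_train, y_train, X_query, k, scales, n_classes)
    return p / len(samples)

if __name__ == "__main__":
    # Synthetic check: two informative features, one pure-noise feature
    # on a much larger scale, echoing the paper's synthetic examples.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    X[:, 2] = rng.normal(size=200) * 5           # irrelevant, badly scaled
    samples = sample_posterior(X, y, n_classes=2, rng=1)
    mean_scales = np.mean([s for _, s in samples], axis=0)
    print("posterior mean scalings:", mean_scales)   # third expected to shrink
    acc = (predict(X, y, X, samples, 2).argmax(1) == y).mean()
    print("training accuracy:", acc)

Averaging predictions over the sampled metrics and neighbourhood sizes, rather than committing to a single tuned (k, metric) pair, is what makes the classifier probabilistic; extending the sketch to full rotations would replace the scaling vector with a transformation matrix, and to full feature selection would require a trans-dimensional (reversible jump) sampler as in the paper.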

Citation (APA)

Everson, R. M., & Fieldsend, J. E. (2004). A variable metric probabilistic k-nearest-neighbours classifier. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3177, 654–659. https://doi.org/10.1007/978-3-540-28651-6_96
