Improved generalization through learning a similarity metric and kernel size

Abstract

Nearest-neighbour interpolation algorithms have many useful properties for applications to learning, but they often exhibit poor generalization. In this paper, it is shown that much better generalization can be obtained by using a variable interpolation kernel in combination with conjugate-gradient optimization of the similarity metric and kernel size. The resulting method is called variable-kernel similarity metric (VSM) learning. It has been tested on a number of standard classification data sets, and on these problems it shows better generalization than back-propagation and most other learning methods. An important advantage is that the system can operate as a black box in which no model or minimization parameters need to be set experimentally by the user. The number of parameters that must be determined through optimization is orders of magnitude smaller than for back-propagation or RBF networks, which may indicate that the method better captures the essential degrees of variation in learning.
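
The abstract only outlines the approach, so the following is a minimal illustrative sketch rather than the paper's exact formulation. It assumes a diagonal similarity metric, a Gaussian kernel over the k nearest neighbours, a leave-one-out squared-error objective, and SciPy's conjugate-gradient optimizer standing in for the paper's own optimization; all names, defaults, and the toy data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def vsm_predict(X_train, y_train, X_query, metric, kernel_size, k=5):
    """Kernel-weighted k-nearest-neighbour prediction with a learned
    diagonal similarity metric and global kernel width (illustrative only)."""
    preds = []
    for x in X_query:
        # Squared distances weighted by the (diagonal) similarity metric.
        d2 = np.sum(metric * (X_train - x) ** 2, axis=1)
        nn = np.argsort(d2)[:k]                          # k nearest neighbours
        w = np.exp(-d2[nn] / (2.0 * kernel_size ** 2))   # Gaussian kernel weights
        preds.append(np.dot(w, y_train[nn]) / (np.sum(w) + 1e-12))
    return np.array(preds)

def loss(params, X, y, k=5):
    """Leave-one-out squared-error loss as a function of the metric weights
    and kernel size (an assumed objective, not taken from the paper)."""
    metric = np.abs(params[:-1])
    kernel_size = np.abs(params[-1]) + 1e-6
    err = 0.0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        pred = vsm_predict(X[mask], y[mask], X[i:i + 1], metric, kernel_size, k)
        err += (pred[0] - y[i]) ** 2
    return err / len(X)

# Toy usage: learn the metric and kernel size by conjugate gradient.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)   # only the first feature matters
init = np.ones(X.shape[1] + 1)                     # metric weights + kernel size
res = minimize(loss, init, args=(X, y), method="CG", options={"maxiter": 20})
print("learned metric weights:", np.abs(res.x[:-1]))
print("learned kernel size:", np.abs(res.x[-1]))
```

In a sketch like this, the learned metric weights should shrink toward zero for irrelevant features, which is one intuition behind the reported gain in generalization over plain nearest-neighbour methods.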

Citation (APA)

Lowe, D. G. (1993). Improved generalization through learning a similarity metric and kernel size. In Proceedings of the International Joint Conference on Neural Networks (Vol. 1, pp. 501–504). IEEE. https://doi.org/10.1109/ijcnn.1993.713963
