Reducing Hubs with Laplacian-based Kernels


Abstract

A “hub” is an object closely surrounded by, or very similar to, many other objects in the dataset. Recent studies by Radovanović et al. demonstrated that in high-dimensional spaces, objects close to the data centroid tend to become hubs. In this paper, we show that the family of kernels based on the graph Laplacian makes all objects in the dataset equally similar to the centroid, and these kernels are therefore expected to produce fewer hubs when used as a similarity measure. We investigate this hypothesis using both synthetic and real-world data. It turns out that these kernels suppress hubs in some cases but not always, and the results appear to be affected by the size of the dataset—a factor not discussed in previous work. However, for the datasets in which hubs are indeed reduced by the Laplacian-based kernels, these kernels work well in classification and information retrieval tasks. This result suggests that the number of hubs, which can be readily computed in an unsupervised fashion, can serve as a yardstick for whether Laplacian-based kernels work effectively on a given dataset.

Keywords: hubness, graph Laplacian, kernel.

© 2013, The Japanese Society for Artificial Intelligence. All rights reserved.
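The two quantities the abstract turns on can be sketched in a few lines of NumPy. This is an illustrative toy on synthetic Gaussian data, not the paper's implementation: it computes the k-occurrence count N_k of Radovanović et al. (whose skewness measures hubness) and one standard member of the Laplacian-based kernel family, the pseudoinverse of the graph Laplacian (the commute-time kernel), whose rows all sum to zero—so every object has the same total similarity, and hence the same similarity to the centroid, as the abstract states.

```python
import numpy as np

# Toy high-dimensional data; all names and parameters here are
# illustrative, not taken from the paper.
rng = np.random.default_rng(0)
n, dim, k = 200, 50, 5
X = rng.standard_normal((n, dim))

# Pairwise squared Euclidean distances, self-distances excluded.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)
knn = np.argsort(d2, axis=1)[:, :k]  # each point's k nearest neighbours

# N_k(x): how often x appears in other points' k-NN lists.
counts = np.bincount(knn.ravel(), minlength=n)

# Skewness of the N_k distribution; large positive skew means hubs.
skew = ((counts - counts.mean()) ** 3).mean() / counts.std() ** 3

# A Laplacian-based kernel: pseudoinverse of the Laplacian of a
# symmetrized k-NN graph (the commute-time kernel). rcond keeps the
# numerically-zero eigenvalues in the null space.
W = np.zeros((n, n))
W[np.arange(n)[:, None], knn] = 1.0
W = np.maximum(W, W.T)                 # symmetrize adjacency
L = np.diag(W.sum(axis=1)) - W         # combinatorial Laplacian
K = np.linalg.pinv(L, rcond=1e-10)

# The all-ones vector lies in L's null space, so K @ 1 = 0: every
# object's total similarity (and so its similarity to the centroid)
# is identical.
row_sums = K @ np.ones(n)
```

Checking `np.allclose(row_sums, 0.0)` confirms the centroid-equalizing property numerically; the hubness skew of the raw Euclidean neighbourhoods can then be compared against the same statistic computed under the kernel-induced similarity.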

Citation (APA)

Suzuki, I., Hara, K., Shimbo, M., & Matsumoto, Y. (2013). Reducing Hubs with Laplacian-based Kernels. Transactions of the Japanese Society for Artificial Intelligence, 28(3), 297–310. https://doi.org/10.1527/tjsai.28.297
