Investigating the Effectiveness of Laplacian-Based Kernels in Hub Reduction

Abstract

A “hub” is an object closely surrounded by, or very similar to, many other objects in the dataset. Recent studies by Radovanović et al. indicate that in high-dimensional spaces, hubs almost always emerge, and objects close to the data centroid tend to become hubs. In this paper, we show that the family of kernels based on the graph Laplacian makes all objects in the dataset equally similar to the centroid, and thus these kernels are expected to produce fewer hubs when used as a similarity measure. We investigate this hypothesis using both synthetic and real-world data. It turns out that these kernels suppress hubs in some cases but not always, and the results seem to be affected by the size of the data. However, for the datasets in which hubs are indeed reduced by the Laplacian-based kernels, these kernels work well in ranking and classification tasks. This result suggests that the amount of hubs, which can be readily computed in an unsupervised fashion, can serve as a yardstick of whether Laplacian-based kernels work effectively for a given dataset.
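
As a minimal, illustrative sketch (not code from the paper), the "amount of hubs" mentioned in the abstract can be measured in an unsupervised fashion by counting each object's k-occurrence N_k, i.e. how often it appears among the k nearest neighbours of the other objects, and taking the skewness of the N_k distribution, following Radovanović et al. The function name, parameters, and synthetic data below are assumptions for illustration only.

```python
# Hubness as skewness of the k-occurrence distribution (illustrative sketch).
import numpy as np
from scipy.stats import skew
from sklearn.neighbors import NearestNeighbors


def hubness_skewness(X, k=10):
    """Skewness of the k-occurrence (N_k) distribution over the rows of X."""
    # Request k + 1 neighbours because each point is its own nearest neighbour.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    neighbours = idx[:, 1:]                       # drop the self-neighbour column
    # N_k[i] = number of times object i appears in other objects' k-NN lists.
    n_k = np.bincount(neighbours.ravel(), minlength=X.shape[0])
    return skew(n_k)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 100))              # high-dimensional Gaussian data
    print("hubness (skewness of N_k):", hubness_skewness(X, k=10))
```

A strongly positive skewness indicates that a few objects (hubs) dominate the nearest-neighbour lists; the abstract proposes this readily computable quantity as a yardstick for whether Laplacian-based kernels are likely to be effective on a given dataset.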

Citation (APA)

Suzuki, I., Hara, K., Shimbo, M., Matsumoto, Y., & Saerens, M. (2012). Investigating the Effectiveness of Laplacian-Based Kernels in Hub Reduction. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, AAAI 2012 (pp. 1112–1118). AAAI Press. https://doi.org/10.1609/aaai.v26i1.8295
