Kernel-based metric learning for semi-supervised clustering

Abstract

The distance metric plays an important role in many machine learning algorithms. Recently, there has been growing interest in distance metric learning for the semi-supervised setting. In the last few years, many methods have been proposed for metric learning when pairwise similarity (must-link) and/or dissimilarity (cannot-link) constraints are available along with unlabeled data. Most of these methods learn a global Mahalanobis metric (or, equivalently, a linear transformation). Although some recently introduced methods have devised nonlinear extensions of linear metric learning methods, they usually allow only limited forms of distance metrics and can use only similarity constraints. In this paper, we propose a nonlinear metric learning method that learns a completely flexible distance metric by learning a nonparametric kernel matrix. The proposed method uses both similarity and dissimilarity constraints, as well as the topological structure of the data, to learn an appropriate distance metric. Our method is formulated as a convex optimization problem for learning a kernel matrix. This convexity allows us to give a metric learning method that is free of local optima. Experimental results on synthetic and real-world data sets show that the proposed method outperforms recently introduced metric learning methods for semi-supervised clustering. © 2009 Elsevier B.V. All rights reserved.
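To make the setting concrete: a Mahalanobis metric is a distance of the form d_M(x, y) = sqrt((x − y)ᵀ M (x − y)) for a positive semidefinite matrix M, and must-link constraints can be used to shape M. The sketch below is not the paper's kernel-based method; it is a minimal linear baseline in the spirit of Relevant Component Analysis (RCA), which sets M to the inverse of the within-pair covariance of must-link difference vectors, shrinking directions of within-class variation. The function names and the regularization constant are illustrative choices, not from the paper.

```python
import numpy as np

def rca_metric(X, must_link, eps=1e-6):
    """Illustrative RCA-style Mahalanobis matrix from must-link pairs.

    X         : (n, d) data matrix
    must_link : list of (i, j) index pairs that should be close
    Returns M : (d, d) positive-definite Mahalanobis matrix
    """
    # Difference vectors of must-link pairs capture "irrelevant" variation.
    diffs = np.array([X[i] - X[j] for i, j in must_link])
    # Within-pair covariance, regularized so the inverse exists.
    C = diffs.T @ diffs / len(diffs) + eps * np.eye(X.shape[1])
    # Inverting C down-weights directions with large within-pair variance.
    return np.linalg.inv(C)

def mahalanobis(x, y, M):
    """Distance d_M(x, y) = sqrt((x - y)^T M (x - y))."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

if __name__ == "__main__":
    # Toy data: must-link pairs differ along axis 0, so that axis is
    # treated as noise and the learned metric stretches axis 1 instead.
    X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    M = rca_metric(X, [(0, 1), (2, 3)])
    print(mahalanobis(X[0], X[1], M))  # small: along the noisy axis
    print(mahalanobis(X[0], X[2], M))  # large: along the informative axis
```

Being linear and global, this baseline cannot adapt the metric to nonlinear cluster shapes and ignores cannot-link constraints, which is precisely the limitation the kernel-matrix formulation in the paper is designed to overcome.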

Citation (APA)

Soleymani Baghshah, M., & Bagheri Shouraki, S. (2010). Kernel-based metric learning for semi-supervised clustering. Neurocomputing, 73(7–9), 1352–1361. https://doi.org/10.1016/j.neucom.2009.12.009
