Low-rank similarity metric learning in high dimensions


Abstract

Metric learning has become a widely used tool in machine learning. To reduce the expensive storage and computation costs incurred by increasing dimensionality, low-rank metric learning has arisen as a more economical alternative. However, existing low-rank metric learning algorithms usually adopt nonconvex objectives and are hence sensitive to the choice of a heuristic low-rank basis. In this paper, we propose a novel low-rank metric learning algorithm that yields bilinear similarity functions. The algorithm scales linearly with the input dimensionality in both space and time, and is therefore applicable to high-dimensional data domains. A convex objective free of heuristics is formulated by leveraging trace-norm regularization to promote low-rankness. Crucially, we prove that all globally optimal metric solutions must retain a certain low-rank structure, which enables our algorithm to decompose the high-dimensional learning task into two steps: an SVD-based projection and a metric learning problem of reduced dimensionality. The latter step can be tackled efficiently with a linearized Alternating Direction Method of Multipliers. The efficacy of the proposed algorithm is demonstrated through experiments on four benchmark datasets with tens of thousands of dimensions.
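To make the two-step scheme described in the abstract concrete, here is one plausible reading of the objective. It is an assumption consistent with the abstract, not the paper's exact formulation: a pairwise hinge loss on the bilinear similarity s(x_i, x_j) = x_i^T M x_j, plus a trace-norm penalty ||M||_* that promotes low-rankness while keeping the problem convex:

```latex
% Assumed objective, reconstructed from the abstract's description;
% the paper's actual loss and constraints may differ.
\min_{M \in \mathbb{R}^{d \times d}} \;
  \frac{1}{|\mathcal{P}|} \sum_{(i,j) \in \mathcal{P}}
  \max\!\bigl(0,\; 1 - y_{ij}\, x_i^{\top} M\, x_j \bigr)
  \; + \; \lambda \, \lVert M \rVert_{*}
```

Below is a minimal Python sketch of the two-step pipeline (SVD-based projection, then metric learning in the reduced space). For brevity it uses proximal gradient descent with singular value thresholding in place of the paper's linearized ADMM; all function names, dimensions, and hyperparameters are illustrative assumptions:

```python
import numpy as np

def svd_projection(X, m):
    """Step 1: top-m left singular vectors of the d x n data matrix X."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :m]  # d x m orthonormal basis

def sv_threshold(M, tau):
    """Prox of tau * ||.||_*: soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def learn_metric(Z, pairs, labels, lam=0.1, eta=0.01, iters=200):
    """Step 2: learn M in the reduced space so that s(x, y) = x^T M y
    scores similar pairs (y = +1) above dissimilar ones (y = -1),
    using a hinge loss plus a trace-norm penalty."""
    M = np.eye(Z.shape[0])
    for _ in range(iters):
        G = np.zeros_like(M)
        for (i, j), y in zip(pairs, labels):
            if 1.0 - y * (Z[:, i] @ M @ Z[:, j]) > 0.0:  # hinge active
                G -= y * np.outer(Z[:, i], Z[:, j])
        # Gradient step on the averaged hinge loss, then trace-norm prox.
        M = sv_threshold(M - eta * G / max(len(labels), 1), eta * lam)
    return M

# Illustrative usage with synthetic data.
X = np.random.randn(5000, 300)   # d = 5000 features, n = 300 points
U = svd_projection(X, m=50)      # step 1: SVD-based projection
Z = U.T @ X                      # reduced 50 x 300 representation
pairs = [(0, 1), (0, 2)]         # toy similar/dissimilar pairs
labels = [1.0, -1.0]
M = learn_metric(Z, pairs, labels)
# Final low-rank bilinear similarity: s(x, y) = (U^T x)^T M (U^T y)
```

Because the learned similarity factors through the projection as (U^T x)^T M (U^T y), it can be stored and evaluated in O(dm) rather than O(d^2), which is consistent with the abstract's claim of linear scaling in the input dimensionality.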

Citation (APA)

Liu, W., Mu, C., Ji, R., Ma, S., Smith, J. R., & Chang, S. F. (2015). Low-rank similarity metric learning in high dimensions. In Proceedings of the National Conference on Artificial Intelligence (Vol. 4, pp. 2792–2799). AI Access Foundation. https://doi.org/10.1609/aaai.v29i1.9639
