Matrix variate Gaussian mixture distribution steered robust metric learning

Citations: 5 · Mendeley readers: 13

Abstract

Mahalanobis Metric Learning (MML) has been actively studied in the machine learning community. Most existing MML methods aim to learn a powerful Mahalanobis distance for computing the similarity of two objects. More recently, several methods have used matrix norm regularizers to constrain the learned distance matrix M and improve performance. However, in real applications the structure of the distance matrix M is complicated and cannot be characterized well by a simple matrix norm. In this paper, we propose a novel robust metric learning method that learns the structure of the distance matrix in a new and natural way. We partition M into blocks and treat each block as a random matrix variate, which is fitted by a matrix variate Gaussian mixture distribution. Unlike existing methods, our model makes no assumptions about M and automatically learns its structure from real data, where the distance matrix M is often neither sparse nor low-rank. We design an effective algorithm to optimize the proposed model and establish the corresponding theoretical guarantee. We conduct extensive evaluations on real-world data, and the experimental results show that our method consistently outperforms related state-of-the-art methods.
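To make the setting concrete, the sketch below shows the two ingredients the abstract mentions: the Mahalanobis distance induced by a learned matrix M, and the partition of M into square blocks that the method models as matrix variates. This is an illustrative NumPy sketch only; the function names, the equal-sized block scheme, and the example matrix are assumptions for demonstration, not the paper's actual algorithm or its mixture-fitting procedure.

```python
import numpy as np

def mahalanobis_distance(x, y, M):
    """Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)),
    where M is a symmetric positive semidefinite matrix."""
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

def partition_into_blocks(M, block_size):
    """Partition a d x d matrix M into square (block_size x block_size)
    blocks; assumes d is divisible by block_size for simplicity."""
    d = M.shape[0]
    assert d % block_size == 0, "d must be divisible by block_size"
    k = d // block_size
    return [M[i * block_size:(i + 1) * block_size,
              j * block_size:(j + 1) * block_size]
            for i in range(k) for j in range(k)]

# With M = I the Mahalanobis distance reduces to the Euclidean distance.
M = np.eye(4)
x = np.array([1.0, 0.0, 0.0, 0.0])
y = np.zeros(4)
print(mahalanobis_distance(x, y, M))   # 1.0 for the identity metric
print(len(partition_into_blocks(M, 2)))  # 4 blocks of shape (2, 2)
```

In the paper's formulation, each such block would then be treated as a random matrix variate and fitted by a matrix variate Gaussian mixture, rather than being constrained by a global sparsity or low-rank norm.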

Citation (APA)

Luo, L., & Huang, H. (2018). Matrix variate Gaussian mixture distribution steered robust metric learning. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 3722–3729). AAAI press. https://doi.org/10.1609/aaai.v32i1.11801
