Flexible nonparametric kernel learning with different loss functions

Abstract

Side information is highly useful in the learning of a nonparametric kernel matrix. However, this often leads to an expensive semidefinite program (SDP). In recent years, a number of dedicated solvers have been proposed. Though much better than off-the-shelf SDP solvers, they still cannot scale to large data sets. In this paper, we propose a novel solver based on the alternating direction method of multipliers (ADMM). The key idea is to use a low-rank decomposition of the kernel matrix Z = XᵀY, with the constraint that X = Y. The resultant optimization problem, though non-convex, has favorable convergence properties and can be efficiently solved without requiring eigen-decomposition in each iteration. Experimental results on a number of real-world data sets demonstrate that the proposed method is as accurate as directly solving the SDP, but can be one to two orders of magnitude faster. © Springer-Verlag 2013.
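The idea sketched in the abstract — factor the kernel as Z = XᵀY, enforce X = Y through an ADMM consensus constraint, and avoid per-iteration eigen-decomposition — can be illustrated with a toy instance. The sketch below is not the paper's solver: it substitutes a simple squared (Frobenius) fit to a given PSD target K0 for the paper's side-information constraints and loss functions, and the names (`rho`, `n_iter`, `K0`) are illustrative assumptions. Each iteration costs only two small linear solves.

```python
# Minimal ADMM sketch of the low-rank split Z = X^T Y with the consensus
# constraint X = Y (scaled dual form). Stand-in objective: fit a given PSD
# target K0 in squared Frobenius loss -- NOT the paper's side-information
# formulation. All parameter names here are illustrative.
import numpy as np

def admm_lowrank_fit(K0, r, rho=2.0, n_iter=500, seed=0):
    """Fit Z = X^T Y ~ K0 subject to X = Y via ADMM.

    Each iteration needs only two r-by-r linear solves --
    no eigen-decomposition, which is the efficiency point
    highlighted in the abstract.
    """
    n = K0.shape[0]
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((r, n))
    Y = X.copy()
    U = np.zeros((r, n))          # scaled dual variable for X - Y = 0
    I = np.eye(r)
    for _ in range(n_iter):
        # X-step: minimize ||X^T Y - K0||^2 + (rho/2)||X - Y + U||^2 in X,
        # i.e. solve (2 Y Y^T + rho I) X = 2 Y K0^T + rho (Y - U)
        X = np.linalg.solve(2 * Y @ Y.T + rho * I,
                            2 * Y @ K0.T + rho * (Y - U))
        # Y-step: (2 X X^T + rho I) Y = 2 X K0 + rho (X + U)
        Y = np.linalg.solve(2 * X @ X.T + rho * I,
                            2 * X @ K0 + rho * (X + U))
        # Dual ascent on the consensus residual X - Y
        U = U + X - Y
    return X, Y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 5))
    K0 = A.T @ A                  # a PSD target, so a symmetric X = Y fit exists
    X, Y = admm_lowrank_fit(K0, r=5)
    Z = X.T @ Y
    print("relative residual:", np.linalg.norm(Z - K0) / np.linalg.norm(K0))
    print("consensus gap ||X - Y||:", np.linalg.norm(X - Y))
```

As the dual variable U accumulates the running consensus residual, X and Y are driven together, so the learned Z = XᵀY becomes (approximately) symmetric PSD without ever projecting onto the PSD cone via eigen-decomposition.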

APA

Hu, E. L., & Kwok, J. T. (2013). Flexible nonparametric kernel learning with different loss functions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8227 LNCS, pp. 116–123). https://doi.org/10.1007/978-3-642-42042-9_15
