Sparse low-rank separated representation models for learning from data

Abstract

We consider the problem of learning a multivariate function from a set of scattered observations using a sparse low-rank separated representation (SSR) model. The model structure considered here is promising for high-dimensional learning problems; however, existing training algorithms based on alternating least-squares (ALS) are known to have convergence difficulties, particularly when the rank of the model is greater than 1. In the present work, we supplement the model structure with sparsity constraints to ensure the well-posedness of the approximation problem. We propose two fast training algorithms to estimate the model parameters: (i) a cyclic coordinate descent algorithm and (ii) a block coordinate descent (BCD) algorithm. While the first algorithm is not provably convergent owing to the non-convexity of the optimization problem, the BCD algorithm guarantees convergence to a Nash equilibrium point. The computational cost of the proposed algorithms is shown to scale linearly with respect to all of the parameters, in contrast to methods based on ALS. Numerical studies on synthetic and real-world regression datasets indicate that the proposed SSR model structure holds significant potential for machine learning problems.
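To make the model class concrete, the sketch below illustrates a generic rank-R separated representation, f(x) ≈ Σ_r Π_d φ(x_d)ᵀc_{r,d}, where each univariate factor is expanded in a small polynomial basis, together with a single block coordinate update that re-fits one factor's coefficients by ridge least squares while the others are held fixed. This is a hypothetical illustration of the general model structure and of a BCD-style update, not the authors' SSR algorithm; the basis choice, regularization, and function names are assumptions, and the sparsity constraints central to the paper are omitted.

```python
import numpy as np

def phi(t, degree=3):
    # Univariate monomial basis [1, t, t^2, ...] as columns, shape (n, degree+1).
    return np.vander(t, degree + 1, increasing=True)

def separated_eval(X, C):
    # X: (n, D) inputs; C: (R, D, P) factor coefficients.
    # Evaluates f(x) = sum_r prod_d phi(x_d) @ C[r, d].
    R, D, P = C.shape
    out = np.zeros(X.shape[0])
    for r in range(R):
        term = np.ones(X.shape[0])
        for d in range(D):
            term *= phi(X[:, d], P - 1) @ C[r, d]
        out += term
    return out

def bcd_step(X, y, C, r, d, lam=1e-8):
    # One block update: re-fit coefficients C[r, d] by ridge least squares
    # with all other factors held fixed (a single BCD-style pass; the paper's
    # actual algorithm additionally enforces sparsity, which is omitted here).
    R, D, P = C.shape
    # Product of the fixed factors of the rank-r term over all dims except d.
    other = np.ones(X.shape[0])
    for dd in range(D):
        if dd != d:
            other *= phi(X[:, dd], P - 1) @ C[r, dd]
    # Residual after removing the rank-r term's contribution.
    Ctmp = C.copy()
    Ctmp[r] = 0.0
    resid = y - separated_eval(X, Ctmp)
    # Linear least-squares problem in C[r, d].
    A = phi(X[:, d], P - 1) * other[:, None]
    C[r, d] = np.linalg.solve(A.T @ A + lam * np.eye(P), A.T @ resid)
    return C
```

Because each block update reduces to a small P × P linear solve, sweeping cyclically over the (r, d) blocks costs a number of operations linear in the rank, the dimension, and the sample size, which is the scaling behaviour the abstract contrasts with ALS.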

Citation (APA)
Audouze, C., & Nair, P. B. (2019). Sparse low-rank separated representation models for learning from data. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 475(2221). https://doi.org/10.1098/rspa.2018.0490
