Thresholding gradient methods in Hilbert spaces: Support identification and linear convergence


Abstract

We study the ℓ1 regularized least squares optimization problem in a separable Hilbert space. We show that the iterative soft-thresholding algorithm (ISTA) converges linearly, without any assumption on the linear operator involved or on the problem. The result is obtained by combining two key concepts: the notion of extended support, a finite set containing the support, and the notion of conditioning over finite-dimensional sets. We prove that ISTA identifies the extended support of the solution after a finite number of iterations, and we derive linear convergence from the conditioning property, which is always satisfied for ℓ1 regularized least squares problems. Our analysis extends to the entire class of thresholding gradient algorithms, for which we provide a conceptually new proof of strong convergence, as well as convergence rates.
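For context, the ISTA iteration studied in the abstract alternates a gradient step on the least squares term with componentwise soft-thresholding (the proximal operator of the ℓ1 norm). Below is a minimal NumPy sketch for the finite-dimensional problem min_x ½‖Ax − b‖² + λ‖x‖₁; the function names `ista` and `soft_threshold` are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Componentwise soft-thresholding: prox of tau * ||.||_1.
    # Shrinks each entry toward zero by tau, zeroing entries with |v_i| <= tau.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, step=None, n_iter=500):
    # Iterative soft-thresholding for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1.
    if step is None:
        # Standard safe step size 1/L, where L = ||A||^2 is the Lipschitz
        # constant of the gradient of the least squares term.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient step on the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
    return x
```

Iterates produced this way are eventually sparse: after finitely many iterations the zero pattern stabilizes, which is the support identification phenomenon the paper analyzes in the Hilbert space setting.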

Citation (APA)
Garrigos, G., Rosasco, L., & Villa, S. (2020). Thresholding gradient methods in Hilbert spaces: Support identification and linear convergence. ESAIM - Control, Optimisation and Calculus of Variations, 26. https://doi.org/10.1051/cocv/2019011
