The Trimmed Lasso: Sparse Recovery Guarantees and Practical Optimization by the Generalized Soft-Min Penalty


Abstract

We present a new approach to solve the sparse approximation or best subset selection problem, namely: find a k-sparse vector x ∊ ℝ^d that minimizes the ℓ2 residual ||Ax - y||_2. We consider a regularized approach, whereby this residual is penalized by the nonconvex trimmed lasso, defined as the ℓ1-norm of x excluding its k largest-magnitude entries. We prove that the trimmed lasso has several appealing theoretical properties, and in particular derive sparse recovery guarantees assuming successful optimization of the penalized objective. Next, we show empirically that directly optimizing this objective can be quite challenging. Instead, we propose a surrogate for the trimmed lasso, called the generalized soft-min. This penalty smoothly interpolates between the classical lasso and the trimmed lasso, while taking into account all possible k-sparse patterns. The generalized soft-min penalty involves summation over (d choose k) terms, yet we derive a polynomial-time algorithm to compute it. This, in turn, yields a practical method for the original sparse approximation problem. Via simulations, we demonstrate its competitive performance compared to current state of the art.
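The trimmed lasso penalty is straightforward to evaluate from the definition above: it equals the ℓ1-norm of x minus the sum of its k largest-magnitude entries, and it vanishes exactly when x is k-sparse. Below is a minimal NumPy sketch of that definition; it is our own illustration, and the function name and interface are not taken from the paper.

    import numpy as np

    def trimmed_lasso(x, k):
        """Trimmed lasso penalty: sum of |x_i| over all but the k largest-magnitude entries.

        The penalty is zero exactly when x is k-sparse, which is why
        penalizing it promotes k-sparse solutions.
        """
        abs_x = np.abs(np.asarray(x, dtype=float))
        if k >= abs_x.size:
            return 0.0
        # Sort magnitudes in descending order and sum everything past the k largest.
        return float(np.sort(abs_x)[::-1][k:].sum())

For example, with x = (3, -0.1, 0.2, 5) and k = 2 the penalty is |-0.1| + |0.2| = 0.3, measuring how far x is from being 2-sparse.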

Cite

APA

Amir, T., Basri, R., & Nadler, B. (2021). The Trimmed Lasso: Sparse Recovery Guarantees and Practical Optimization by the Generalized Soft-Min Penalty. SIAM Journal on Mathematics of Data Science, 3(3), 900–929. https://doi.org/10.1137/20M1330634
