Nonconvex Sparse Regularization and Splitting Algorithms

  • Chartrand R
  • Yin W

Abstract

Nonconvex regularization functions such as the ℓp quasinorm (0 < p < 1) can recover sparser solutions from fewer measurements than the convex ℓ1 regularization function. They have been widely used for compressive sensing and signal processing. This chapter briefly reviews the development of algorithms for nonconvex regularization. Because nonconvex regularization usually has different regularity properties from the other functions in a problem, we often apply operator splitting (forward-backward splitting) to develop algorithms that treat them separately. The nonconvex regularization is handled via its proximal mapping. We also review another class of algorithms, coordinate descent, that works for both convex and nonconvex functions. These algorithms split the variables into small, possibly parallel, subproblems, each of which updates one variable while fixing the others. Their theory and applications have recently been extended to cover nonconvex regularization functions, which we review in this chapter. Finally, we also briefly mention an ADMM-based algorithm for nonconvex regularization, as well as recent algorithms for so-called nonconvex sorted ℓ1 and ℓ1 − ℓ2 minimization.

1 Early history of nonconvex regularization for sparsity

The attempt to compute a sparse solution of a problem (such as a linear system of equations) by minimizing a nonconvex penalty function can be traced back at least to Leahy and Jeffs [31], who used a simplex algorithm (essentially a nonlinear version of linear programming) to minimize the ℓp norm with p < 1 subject to a linear constraint. They describe the algorithm as similar to one of Barrodale and Roberts [3], where ℓp norm minimization is considered in a different context. The next algorithmic development, named FOCUSS (FOCal Underdetermined System Solver), came from Gorodnitsky and Rao [25]. In fact, the approach was a much older method, iteratively reweighted least squares (IRLS) [30], applied to ℓ0 norm minimization. This was extended to general ℓp minimization by Rao and Kreutz-Delgado [45]. Global convergence was erroneously claimed in [26], based on Zangwill's Global Convergence Theorem [65], which only provides that subsequential limits are local minima.

Attention to nonconvex regularization for sparsity was next spurred by the development of compressive sensing [12, 23], which mostly featured ℓ1 minimization. Generalization to ℓp minimization with p < 1 was carried out by Chartrand, initially with a projected gradient algorithm [15], followed by an IRLS approach with Yin [18]. A crucial difference between this work and the earlier FOCUSS work was the use of iterative mollification, where |x|^p was replaced by (x^2 + ε_n)^(p/2) for a sequence (ε_n) converging geometrically to zero. This approach, reminiscent of the graduated nonconvexity approach of Blake and Zisserman [7], resulted in far better signal reconstruction, seemingly due to much better avoidance of local minima. A similar approach was developed independently by Mohimani et al. [39], except with iterative mollification of the ℓ0 norm. In addition, Candès et al. [13] developed a reweighted ℓ1 algorithm, using a fixed mollifying ε. If the same iterative mollification approach is used, empirical evidence suggests that reweighted ℓ1 and IRLS are equally effective.

2 Forward-backward splitting and thresholdings
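As a rough, non-authoritative illustration of two ideas described above (IRLS with iterative mollification of |x|^p, and forward-backward splitting in which a gradient step on the data-fidelity term is followed by a thresholding step on the nonconvex ℓp term), the following NumPy sketch may be useful. The epsilon schedule, step size, regularization weight, and the particular p-shrinkage mapping used as a stand-in for the proximal step are illustrative assumptions, not choices taken from the chapter.

```python
# Minimal sketches (NumPy only); all parameter defaults are illustrative assumptions.
import numpy as np


def irls_lp(A, b, p=0.5, eps0=1.0, eps_decay=0.5, n_outer=30, n_inner=5):
    """IRLS for min ||x||_p^p subject to Ax = b, with iterative mollification:
    |x_i|^p is replaced by (x_i^2 + eps)^(p/2) and eps is decreased geometrically."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # least-squares initialization
    eps = eps0
    for _ in range(n_outer):
        for _ in range(n_inner):
            d = (x**2 + eps) ** (1.0 - p / 2.0)       # inverse IRLS weights
            # weighted least-squares step: x = D A^T (A D A^T)^{-1} b, D = diag(d)
            x = d * (A.T @ np.linalg.solve((A * d) @ A.T, b))
        eps *= eps_decay                              # geometric decrease of eps
    return x


def p_shrink(t, lam, p):
    """A p-shrinkage thresholding (soft thresholding when p = 1); one shrinkage
    mapping from this literature, used here as a stand-in for the exact prox."""
    mag = np.abs(t)
    with np.errstate(divide="ignore", invalid="ignore"):
        shrunk = np.maximum(mag - lam ** (2.0 - p) * mag ** (p - 1.0), 0.0)
    return np.sign(t) * np.nan_to_num(shrunk, nan=0.0, posinf=0.0, neginf=0.0)


def forward_backward_lp(A, b, lam=0.05, p=0.5, n_iter=500):
    """Forward-backward splitting for min 0.5*||Ax - b||^2 + lam * sum_i |x_i|^p:
    a gradient (forward) step on the smooth fidelity term, then a thresholding
    (backward) step handling the nonconvex regularizer."""
    x = np.zeros(A.shape[1])
    tau = 1.0 / np.linalg.norm(A, 2) ** 2             # step size 1 / ||A||_2^2
    for _ in range(n_iter):
        x = p_shrink(x - tau * (A.T @ (A @ x - b)), tau * lam, p)
    return x


# Tiny usage example: recover an 8-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true
x_irls = irls_lp(A, b, p=0.5)
x_fb = forward_backward_lp(A, b, lam=0.05, p=0.5)
```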

Citation (APA)

Chartrand, R., & Yin, W. (2016). Nonconvex Sparse Regularization and Splitting Algorithms (pp. 237–249). https://doi.org/10.1007/978-3-319-41589-5_7
