A coordinate gradient descent method for nonsmooth separable minimization

Abstract

We consider the problem of minimizing the sum of a smooth function and a separable convex function. This problem includes as special cases bound-constrained optimization and smooth optimization with ℓ1-regularization. We propose a (block) coordinate gradient descent method for solving this class of nonsmooth separable problems. We establish global convergence and, under a local Lipschitzian error bound assumption, linear convergence for this method. The local Lipschitzian error bound holds under assumptions analogous to those for constrained smooth optimization, e.g., the convex function is polyhedral and the smooth function is (nonconvex) quadratic or is the composition of a strongly convex function with a linear mapping. We report numerical experience with solving the ℓ1-regularized versions of unconstrained optimization problems from Moré et al. (ACM Trans. Math. Softw. 7, 17–41, 1981) and from the CUTEr set (Gould and Orban, ACM Trans. Math. Softw. 29, 373–394, 2003). Comparison with L-BFGS-B and MINOS, applied to a reformulation of the ℓ1-regularized problem as a bound-constrained optimization problem, is also reported. © 2007 Springer-Verlag.
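The method sketched in the abstract updates one coordinate (or block of coordinates) at a time, taking a gradient step on the smooth part followed by the proximal operation induced by the separable convex term. As a rough illustration only (not the paper's exact algorithm), the sketch below treats the ℓ1-regularized least-squares instance f(x) = ½‖Ax − b‖² + λ‖x‖₁, cycling through coordinates with a fixed per-coordinate stepsize 1/L_j in place of the Armijo-type line search the paper uses; all function names and the problem setup here are our own assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*|.|: shrink z toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cgd_l1(A, b, lam, n_iters=200):
    """Cyclic coordinate gradient descent (simplified sketch) for
        min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Uses a fixed stepsize 1/L_j per coordinate rather than the
    paper's line-search rule, and single coordinates rather than blocks."""
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                  # residual Ax - b, kept up to date
    L = (A ** 2).sum(axis=0)       # per-coordinate Lipschitz constants ||A_j||^2
    for _ in range(n_iters):
        for j in range(n):
            if L[j] == 0.0:
                continue           # column is zero; coordinate has no effect
            g = A[:, j] @ r        # partial gradient of the smooth part
            x_new = soft_threshold(x[j] - g / L[j], lam / L[j])
            if x_new != x[j]:
                r += A[:, j] * (x_new - x[j])   # cheap residual update
                x[j] = x_new
    return x
```

Because the residual is updated incrementally, each coordinate step costs O(m), which is what makes Gauss–Seidel-style cycling attractive for this problem class.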

Citation (APA)

Tseng, P., & Yun, S. (2009). A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117(1–2), 387–423. https://doi.org/10.1007/s10107-007-0170-0
