On the convergence of the coordinate descent method for convex differentiable minimization

Abstract

The coordinate descent method enjoys a long history in convex differentiable minimization. Surprisingly, very little is known about the convergence of the iterates generated by this method. Convergence typically requires restrictive assumptions such as that the cost function has bounded level sets and is in some sense strictly convex. In a recent work, Luo and Tseng showed that the iterates are convergent for the symmetric monotone linear complementarity problem, for which the cost function is convex quadratic, but not necessarily strictly convex, and does not necessarily have bounded level sets. In this paper, we extend these results to problems for which the cost function is the composition of an affine mapping with a strictly convex function which is twice differentiable in its effective domain. In addition, we show that the convergence is at least linear. As a consequence of this result, we obtain, for the first time, that the dual iterates generated by a number of existing methods for matrix balancing and entropy optimization are linearly convergent. © 1992 Plenum Publishing Corporation.
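To make the setting concrete, below is a minimal sketch of cyclic coordinate descent with exact one-dimensional minimization, applied to a convex quadratic cost of the form f(x) = (1/2)||Ex - b||^2 with a rank-deficient E, so that f is convex but not strictly convex and has unbounded level sets. This is an illustration only, not the paper's algorithm or assumptions: all names (coordinate_descent, E, phi, the random test instance) are assumptions introduced here, and the paper's more general setting (an affine mapping composed with a strictly convex, twice differentiable function, possibly with constraints) is not captured.

import numpy as np
from scipy.optimize import minimize_scalar

def coordinate_descent(f, x0, n_sweeps=200, tol=1e-10):
    # Cyclically minimize f over one coordinate at a time, using an exact
    # one-dimensional line search for each coordinate subproblem.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_sweeps):
        x_prev = x.copy()
        for i in range(x.size):
            def phi(t, i=i):
                # Restriction of f to the i-th coordinate direction.
                y = x.copy()
                y[i] = t
                return f(y)
            x[i] = minimize_scalar(phi).x
        if np.linalg.norm(x - x_prev) < tol:  # a full sweep barely moved
            break
    return x

# Illustrative instance (an assumption, not from the paper):
# f(x) = 0.5 * ||E x - b||^2 with a rank-deficient E, so f is convex quadratic
# but not strictly convex and its level sets are unbounded.
rng = np.random.default_rng(0)
E3 = rng.standard_normal((5, 3))
E = np.hstack([E3, E3[:, :1] + E3[:, 1:2]])  # 4th column depends on the first two
b = rng.standard_normal(5)
f = lambda x: 0.5 * np.sum((E @ x - b) ** 2)

x_hat = coordinate_descent(f, np.zeros(4))
print(x_hat, f(x_hat))

The sketch only illustrates that the coordinate descent iterates settle even though f is neither strictly convex nor has bounded level sets; the paper's contribution is the proof that, in its setting, such iterates converge and do so at least linearly.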

Citation (APA)

Luo, Z. Q., & Tseng, P. (1992). On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72(1), 7–35. https://doi.org/10.1007/BF00939948
