Consider the problem of minimizing the sum of two convex functions, one smooth and the other non-smooth. In this paper, we introduce a general class of approximate proximal splitting (APS) methods for solving such minimization problems. The APS class includes many well-known algorithms, such as the proximal splitting method, the block coordinate descent (BCD) method, and approximate gradient projection methods for smooth convex optimization. We establish the linear convergence of APS methods under a local error bound assumption. Since the latter is known to hold for compressive sensing and sparse group LASSO problems, our analysis implies the linear convergence of the BCD method for these problems without a strong convexity assumption. © 2014 Operations Research Society of China, Periodicals Agency of Shanghai University, and Springer-Verlag Berlin Heidelberg.
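To make the problem setting concrete, here is a minimal sketch of the classic proximal gradient (splitting) iteration, a basic member of the family the abstract describes, applied to the LASSO instance min_x 0.5‖Ax − b‖² + λ‖x‖₁. This is an illustrative example, not the paper's APS method; the function names and step-size choice are my own assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (handles the non-smooth part).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient(A, b, lam, step, n_iters=500):
    """Classic proximal gradient iteration (illustrative, not the APS method):
    minimize 0.5 * ||Ax - b||^2 + lam * ||x||_1.
    Each iteration takes a gradient step on the smooth term,
    then a prox step on the non-smooth term."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                         # gradient of smooth part
        x = soft_threshold(x - step * grad, step * lam)  # prox of non-smooth part
    return x
```

A standard step-size choice is `step = 1 / L`, where `L = ||A||₂²` is the Lipschitz constant of the gradient of the smooth term.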
Kadkhodaie, M., Sanjabi, M., & Luo, Z. Q. (2014). On the Linear Convergence of the Approximate Proximal Splitting Method for Non-smooth Convex Optimization. Journal of the Operations Research Society of China, 2(2), 123–141. https://doi.org/10.1007/s40305-014-0047-x