Self-adaptive inexact proximal point methods

Abstract

We propose a class of self-adaptive proximal point methods suitable for degenerate optimization problems where multiple minimizers may exist, or where the Hessian may be singular at a local minimizer. If the proximal regularization parameter has the form μ(x) = β‖∇f(x)‖^η, where η ∈ [0,2) and β > 0 is a constant, we obtain convergence to the set of minimizers that is linear for η = 0 and β sufficiently small, superlinear for η ∈ (0,1), and at least quadratic for η ∈ [1,2). Two different acceptance criteria for an approximate solution to the proximal problem are analyzed. These criteria are expressed in terms of the gradient of the proximal function, the gradient of the original function, and the iteration difference. With either acceptance criterion, the convergence results are analogous to those of the exact iterates. Preliminary numerical results are presented using some ill-conditioned CUTE test problems. © 2007 Springer Science+Business Media, LLC.
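The abstract describes an iteration of the form x_{k+1} ≈ argmin_y f(y) + (μ(x_k)/2)‖y − x_k‖², with the regularization weight μ(x) = β‖∇f(x)‖^η chosen adaptively from the current gradient norm. The following one-dimensional Python sketch illustrates that idea under simplifying assumptions (exact formulas for the gradient and Hessian, a Newton inner solver, and simple stopping rules); it is an illustration of the scheme, not the authors' algorithm or their acceptance criteria.

```python
# Illustrative self-adaptive proximal point iteration in 1-D.
# The regularization parameter follows the form in the abstract,
# mu(x) = beta * |grad f(x)|**eta with eta in [0, 2); everything
# else (inner solver, tolerances) is a simplifying assumption.

def self_adaptive_prox(grad, hess, x0, beta=1.0, eta=1.5,
                       tol=1e-10, max_outer=100, max_inner=50):
    """Approximately minimize f via proximal steps
    x_{k+1} ~ argmin_y f(y) + (mu/2)*(y - x_k)**2."""
    x = x0
    for _ in range(max_outer):
        g = grad(x)
        if abs(g) <= tol:                 # gradient-based stopping test
            break
        mu = beta * abs(g) ** eta         # self-adaptive regularization
        # Inexact inner solve: Newton steps on the proximal subproblem,
        # whose gradient is grad(y) + mu*(y - x) and Hessian hess(y) + mu.
        y = x
        for _ in range(max_inner):
            step = (grad(y) + mu * (y - x)) / (hess(y) + mu)
            y -= step
            if abs(step) <= tol:
                break
        x = y
    return x

# Degenerate test problem: f(x) = x**4 has a singular Hessian at the
# minimizer x* = 0, the kind of degeneracy the paper targets.
grad = lambda x: 4.0 * x ** 3
hess = lambda x: 12.0 * x ** 2
x_star = self_adaptive_prox(grad, hess, x0=1.0)
```

Note that the added μ term keeps the inner Hessian hess(y) + μ positive even where hess(y) is singular, which is why proximal regularization helps on degenerate problems like this one.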

Citation (APA)
Hager, W. W., & Zhang, H. (2008). Self-adaptive inexact proximal point methods. Computational Optimization and Applications, 39(2), 161–181. https://doi.org/10.1007/s10589-007-9067-3
