Global rates of convergence for nonconvex optimization on manifolds

Abstract

We consider the minimization of a cost function f on a manifold M using Riemannian gradient descent and Riemannian trust regions (RTR). We focus on satisfying necessary optimality conditions within a tolerance ε. Specifically, we show that, under Lipschitz-type assumptions on the pullbacks of f to the tangent spaces of M, both of these algorithms produce points with Riemannian gradient norm smaller than ε in O(1/ε²) iterations. Furthermore, RTR returns a point where, in addition, the least eigenvalue of the Riemannian Hessian is larger than −ε, in O(1/ε³) iterations. There are no assumptions on initialization. The rates match their (sharp) unconstrained counterparts as a function of the accuracy ε (up to constants) and hence are sharp in that sense. These are the first deterministic results for global rates of convergence to approximate first- and second-order Karush-Kuhn-Tucker points on manifolds. They apply in particular to optimization constrained to compact submanifolds of ℝⁿ, under simpler assumptions.
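As a concrete illustration of the first-order setting described above (not the authors' implementation), the following minimal sketch runs Riemannian gradient descent on the unit sphere, a compact submanifold of ℝⁿ, for the cost f(x) = xᵀAx. The function name, the crude Lipschitz-based step size, the normalization retraction and the example problem are assumptions made for this sketch; the paper's guarantee concerns the number of iterations needed to drive the Riemannian gradient norm below ε.

import numpy as np

def riemannian_gradient_descent(A, x0, eps=1e-6, max_iter=100000):
    """Minimize f(x) = x^T A x over the unit sphere {x : ||x|| = 1}.

    Illustrative sketch only: fixed step 1/(2*||A||_2) (a crude Lipschitz-type
    bound assumed here), normalization as the retraction, and a stop once the
    Riemannian gradient norm drops below eps (an approximate first-order
    critical point).
    """
    step = 1.0 / (2.0 * np.linalg.norm(A, 2))
    x = x0 / np.linalg.norm(x0)
    for k in range(max_iter):
        egrad = 2.0 * A @ x                 # Euclidean gradient of f at x
        rgrad = egrad - (x @ egrad) * x     # project onto the tangent space at x
        if np.linalg.norm(rgrad) <= eps:    # ||grad f(x)|| <= eps: stop
            return x, k
        x = x - step * rgrad                # gradient step in the tangent space
        x = x / np.linalg.norm(x)           # retract back onto the sphere
    return x, max_iter

# Example use: extreme-eigenvector-type problem on a random symmetric matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)); A = (A + A.T) / 2
x, iters = riemannian_gradient_descent(A, rng.standard_normal(50), eps=1e-4)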

Citation (APA)

Boumal, N., Absil, P.-A., & Cartis, C. (2019). Global rates of convergence for nonconvex optimization on manifolds. IMA Journal of Numerical Analysis, 39(1), 1–33. https://doi.org/10.1093/imanum/drx080
