General Hölder Smooth Convergence Rates Follow from Specialized Rates Assuming Growth Bounds


Abstract

Often in the analysis of first-order methods for both smooth and nonsmooth optimization, assuming the existence of a growth/error bound or KL condition facilitates a much stronger convergence analysis. Hence, separate analyses are typically needed for the general case and for the growth-bounded case. We give meta-theorems for deriving general convergence rates from those assuming a growth lower bound. Applying this simple but conceptually powerful tool to the proximal point, subgradient, bundle, dual averaging, gradient descent, Frank–Wolfe, and universal accelerated methods immediately recovers their known convergence rates for general convex optimization problems from their specialized rates. New convergence results follow for bundle methods, dual averaging, and Frank–Wolfe. Our results can lift any rate based on Hölder continuous gradients and Hölder growth bounds. Moreover, our theory provides simple proofs of optimal convergence lower bounds under Hölder growth from textbook examples without growth bounds.
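
For concreteness, the two regularity conditions named in the abstract are commonly stated as below. This is a sketch of the standard definitions only; the exponent and constant conventions ($L$, $\nu$, $\mu$, $p$, and the minimizer set $X^\ast$) are assumed notation, not necessarily the paper's exact formulation.

\[
\text{H\"older smoothness:}\quad \|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\|^{\nu}, \qquad \nu \in (0,1],
\]
\[
\text{H\"older growth:}\quad f(x) - \min_{z} f(z) \ \ge\ \mu\,\operatorname{dist}(x, X^\ast)^{p}, \qquad p \ge 1.
\]

Under this reading, the meta-theorems take a convergence rate proved assuming the growth inequality and lift it to a rate that holds without any growth assumption.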

Citation (APA)

Grimmer, B. (2023). General Hölder Smooth Convergence Rates Follow from Specialized Rates Assuming Growth Bounds. Journal of Optimization Theory and Applications, 197(1), 51–70. https://doi.org/10.1007/s10957-023-02178-4
