A generic approach for accelerating stochastic zeroth-order convex optimization

Abstract

In this paper, we propose a generic approach for accelerating the convergence of existing algorithms for stochastic zeroth-order convex optimization (SZCO). Standard techniques for accelerating stochastic zeroth-order algorithms either use multiple function evaluations per iteration (e.g., two-point evaluations) or exploit global conditions of the problem (e.g., smoothness and strong convexity). However, these classical acceleration techniques necessarily restrict the applicability of newly developed algorithms. The key to our generic approach is to exploit a local growth condition (also called a local error bound condition) of the objective function in SZCO. The benefits of the proposed acceleration technique are: (i) it is applicable to both the one-point and two-point evaluation settings; (ii) it does not require strong convexity or smoothness of the objective function; (iii) it yields improved convergence for a broad family of problems. Empirical studies in various settings demonstrate the effectiveness of the proposed acceleration approach.
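The abstract does not spell out the local growth (local error bound) condition; in the error-bound literature it commonly takes the form below, where $\mathcal{X}_*$ is the optimal set, $f_*$ the optimal value, and $c, \theta > 0$ are constants (this notation is assumed here, not quoted from the paper):

```latex
f(x) - f_* \;\ge\; c \cdot \operatorname{dist}(x, \mathcal{X}_*)^{\theta}
\quad \text{for all } x \text{ in a neighborhood of } \mathcal{X}_*
```

For context, the one-point and two-point evaluation settings mentioned above refer to the standard random-direction gradient estimators used in zeroth-order optimization. The sketch below illustrates both estimators; the function names, the smoothing radius `delta`, and the Gaussian-then-normalize direction sampling are illustrative choices, not the paper's specific algorithm:

```python
import numpy as np

def one_point_grad(f, x, delta=1e-3, rng=None):
    """One-point zeroth-order gradient estimator: a single (possibly
    noisy) function evaluation per query, scaled along a random direction."""
    if rng is None:
        rng = np.random.default_rng()
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform direction on the unit sphere
    return (d / delta) * f(x + delta * u) * u

def two_point_grad(f, x, delta=1e-3, rng=None):
    """Two-point zeroth-order gradient estimator: a symmetric finite
    difference along a random direction; lower variance than one-point."""
    if rng is None:
        rng = np.random.default_rng()
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / (2 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

# Example: estimate a subgradient of a non-smooth convex test function.
f = lambda x: np.sum(np.abs(x))
g = two_point_grad(f, np.ones(5))
```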

Citation (APA)

Yu, X., King, I., Lyu, M. R., & Yang, T. (2018). A generic approach for accelerating stochastic zeroth-order convex optimization. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 3040–3046). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/422
