Global continuous optimization with error bound and fast convergence


Abstract

This paper considers global optimization of a black-box objective function that may be non-convex and non-differentiable. Such difficult optimization problems arise in many real-world applications, such as parameter tuning in machine learning, engineering design, and planning with a complex physics simulator. This paper proposes a new global optimization algorithm, called Locally Oriented Global Optimization (LOGO), which aims for both fast convergence in practice and a finite-time error bound in theory. The advantages and usage of the new algorithm are illustrated via theoretical analysis and an experiment conducted with 11 benchmark test functions. Further, we modify the LOGO algorithm to solve a planning problem via policy search over a continuous state/action space with a long time horizon, while maintaining its finite-time error bound. We apply the proposed planning method to accident management of a nuclear power plant. The results of this application study demonstrate the practical utility of our method.
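To make the problem setting concrete, the sketch below shows a generic space-partitioning approach to black-box global minimization in one dimension. This is not the LOGO algorithm itself (whose selection rule and analysis are the paper's contribution); it is a minimal DIRECT-style illustration, assuming only that the objective can be evaluated pointwise with no gradients. Each step trisects both the cell whose center has the best observed value (a local move) and the widest cell (a global move).

```python
def partition_minimize(f, lo, hi, n_evals=60):
    """Minimal space-partitioning sketch for black-box 1-D minimization.

    Maintains a list of cells (left, right, center, f(center)). Each
    iteration trisects the cell with the best center value and the
    widest cell; the middle third reuses the already-evaluated center,
    so only two new evaluations per split are needed.
    """
    mid = (lo + hi) / 2.0
    cells = [(lo, hi, mid, f(mid))]  # (left, right, center, f(center))
    evals = 1
    while evals + 4 <= n_evals:
        idx_best = min(range(len(cells)), key=lambda i: cells[i][3])
        idx_wide = max(range(len(cells)),
                       key=lambda i: cells[i][1] - cells[i][0])
        # Pop larger index first so the smaller index stays valid.
        for idx in sorted({idx_best, idx_wide}, reverse=True):
            l, r, c, fc = cells.pop(idx)
            w = (r - l) / 3.0
            cl, cr = l + w / 2.0, r - w / 2.0  # centers of outer thirds
            cells.append((l, l + w, cl, f(cl)))
            cells.append((l + w, r - w, c, fc))  # middle keeps old center
            cells.append((r - w, r, cr, f(cr)))
            evals += 2
    # Best center found is the incumbent answer.
    x, fx = min(((c, fc) for _, _, c, fc in cells), key=lambda t: t[1])
    return x, fx
```

Splitting only the best cell would converge quickly but could get trapped in a local basin of a non-convex objective; also splitting the widest cell guarantees that every region is eventually refined, which is the kind of exploration/exploitation balance that finite-time error bounds for this algorithm family rest on.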

Citation (APA)
Kawaguchi, K., Maruyama, Y., & Zheng, X. (2016). Global continuous optimization with error bound and fast convergence. Journal of Artificial Intelligence Research, 56, 153–195. https://doi.org/10.1613/jair.4742
