Random embedding has been applied with empirical success to large-scale black-box optimization problems with low effective dimensions. This paper proposes the EMBEDDEDHUNTER algorithm, which incorporates the technique into a hierarchical stochastic bandit setting, following the optimism-in-the-face-of-uncertainty principle and breaking away from the multiple-run framework in which random embedding has conventionally been applied, in the manner of stochastic black-box optimization solvers. Our proposition is motivated by the bounded mean variation of the objective value at a low-dimensional point randomly projected into the decision space of a Lipschitz-continuous problem. In essence, the EMBEDDEDHUNTER algorithm optimistically expands a partitioning tree over a low-dimensional search space, whose dimension equals the effective dimension of the problem, based on a bounded number of random embeddings of points sampled from that space. In contrast to the probabilistic guarantees of multiple-run random-embedding algorithms, the finite-time analysis of the proposed algorithm presents a theoretical upper bound on the regret as a function of the algorithm's number of iterations. Furthermore, numerical experiments were conducted to validate its performance. The results show a clear performance gain over recently proposed random-embedding methods for large-scale problems, provided the intrinsic dimensionality is low.
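As an informal illustration of the random-embedding idea the abstract builds on, the Python sketch below evaluates a high-dimensional objective only through the random projection of low-dimensional points, and refines the low-dimensional search box with a simple optimistic split rule. This is not the authors' EMBEDDEDHUNTER implementation; the toy objective `sphere_high_dim`, the clipping box, and the grid-refinement loop are illustrative assumptions standing in for the partitioning-tree expansion described above.

```python
# Minimal sketch of black-box optimization via random embedding (assumptions,
# not the EMBEDDEDHUNTER algorithm itself).
import numpy as np

D, d = 1000, 2            # ambient dimension and assumed effective dimension
rng = np.random.default_rng(0)

def sphere_high_dim(x):
    """Toy D-dimensional objective whose value depends on only two coordinates."""
    return float(np.sum(x[:2] ** 2))

# Random embedding: a low-dimensional point y is mapped into R^D via A @ y.
A = rng.normal(size=(D, d))

def embedded_objective(y, box=2.0):
    """Evaluate the high-dimensional objective at the projection of y,
    clipped to the feasible box of the original problem."""
    x = np.clip(A @ y, -box, box)
    return sphere_high_dim(x)

def optimistic_search(lo, hi, n_iters=20):
    """Greedy stand-in for optimistic tree expansion: repeatedly split the
    most promising cell of the low-dimensional box along its longest side."""
    cells = [(np.array(lo, float), np.array(hi, float))]
    best_y, best_val = None, np.inf
    for _ in range(n_iters):
        # score each cell by the objective at its center (lower is better)
        scores = []
        for a, b in cells:
            center = (a + b) / 2.0
            val = embedded_objective(center)
            scores.append(val)
            if val < best_val:
                best_y, best_val = center, val
        # split the most promising cell into two children
        i = int(np.argmin(scores))
        a, b = cells.pop(i)
        axis = int(np.argmax(b - a))
        mid = (a[axis] + b[axis]) / 2.0
        left_hi, right_lo = b.copy(), a.copy()
        left_hi[axis], right_lo[axis] = mid, mid
        cells += [(a, left_hi), (right_lo, b)]
    return best_y, best_val

y_star, f_star = optimistic_search(lo=[-1.0, -1.0], hi=[1.0, 1.0])
print(f"best low-dimensional point {y_star}, objective {f_star:.4f}")
```

The key point the sketch conveys is that the search tree lives entirely in the d-dimensional space; the high-dimensional problem is touched only through the fixed random matrix `A` at evaluation time.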
Al-Dujaili, A., & Suresh, S. (2017). Embedded bandits for large-scale black-box optimization. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 758–764). AAAI press. https://doi.org/10.1609/aaai.v31i1.10650