Runtime analysis of (1 + 1) evolutionary algorithm controlled with Q-learning using greedy exploration strategy on OneMAX+ZEROMAX problem

Abstract

Some optimization problems have a target objective, which is the one to be optimized, together with several extra objectives. The extra objectives may or may not help the optimization process, measured as the number of objective evaluations needed to reach an optimum of the target objective. OneMax+ZeroMax is a previously proposed benchmark problem where the target objective is OneMax and the single extra objective is ZeroMax, which equals the number of zero bits in the bit vector. It is an example of a problem whose extra objective is unhelpful, so an objective selection method should learn to ignore it. The EA+RL method selects the objectives to be optimized by an evolutionary algorithm (EA) using reinforcement learning (RL). It was previously shown that EA+RL runs in Θ(N log N) on OneMax+ZeroMax when configured to use the randomized local search algorithm and the Q-learning algorithm with the greedy exploration strategy. We present the runtime analysis for the case when the (1 + 1)-EA is used instead, and show that the expected running time is at most 3.12 e N log N.
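For intuition, the following Python sketch shows one common single-state formulation of EA+RL on OneMax+ZeroMax: at each step a greedy Q-learning agent picks which objective the (1 + 1)-EA uses for acceptance and is rewarded with the change in the target objective. This is an illustrative sketch, not the authors' implementation; the reward definition, the learning parameters alpha and gamma, and the uniform tie-breaking are assumptions made here for concreteness.

```python
import random


def onemax(x):
    """Target objective: the number of one bits in the vector."""
    return sum(x)


def zeromax(x):
    """Extra (obstructive) objective: the number of zero bits."""
    return len(x) - sum(x)


def ea_rl(n, alpha=0.5, gamma=0.5, seed=None):
    """EA+RL with a (1+1)-EA and single-state greedy Q-learning (sketch)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    objectives = [onemax, zeromax]
    q = [0.0, 0.0]  # one Q-value per selectable objective
    evaluations = 0
    while onemax(x) < n:
        # Greedy exploration: choose an objective with maximal Q-value,
        # breaking ties uniformly at random.
        best = max(q)
        a = rng.choice([i for i, v in enumerate(q) if v == best])
        # (1+1)-EA standard bit mutation: flip each bit with probability 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        # Accept the offspring with respect to the objective the agent chose.
        if objectives[a](y) >= objectives[a](x):
            reward = onemax(y) - onemax(x)  # change in the *target* objective
            x = y
        else:
            reward = 0
        q[a] += alpha * (reward + gamma * max(q) - q[a])  # Q-learning update
        evaluations += 1
    return evaluations


print(ea_rl(64, seed=1))
```

Whenever the agent selects ZeroMax and the offspring is accepted, the number of one bits cannot increase, so the reward is non-positive and ZeroMax's Q-value decays until the greedy policy settles on OneMax; intuitively, this self-correction is the mechanism that the runtime analysis quantifies.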

Citation (APA)

Antipov, D., Buzdalov, M., & Doerr, B. (2015). Runtime analysis of (1 + 1) evolutionary algorithm controlled with Q-learning using greedy exploration strategy on OneMAX+ZEROMAX problem. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9026, pp. 160–172). Springer Verlag. https://doi.org/10.1007/978-3-319-16468-7_14
