The mathematical runtime analysis of evolutionary algorithms traditionally considers the time an algorithm needs to find a solution of a certain quality when initialized with a random population. In practical applications, however, it may be possible to guess solutions that are better than random ones. We start a mathematical runtime analysis for such situations. We observe that different algorithms profit to very different degrees from a better initialization. We also show that the optimal parameterization of an algorithm can depend strongly on the quality of the initial solutions. To overcome this difficulty, self-adjusting and randomized heavy-tailed parameter choices can be profitable. Finally, we observe a larger gap between the performance of the best evolutionary algorithm we found and the corresponding black-box complexity. This could suggest that evolutionary algorithms that better exploit good initial solutions are still to be found. These first findings stem from analyzing the performance of the $$(1+1)$$ evolutionary algorithm and the static, self-adjusting, and heavy-tailed $$(1+(\lambda,\lambda))$$ GA on the OneMax benchmark, but we are optimistic that the question of how to profit from good initial solutions is interesting beyond these first examples.
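To make the setting concrete, the following is a minimal sketch of the $$(1+1)$$ evolutionary algorithm on OneMax, started from a solution with a prescribed fitness rather than a uniformly random one. The function names, the mutation rate of $1/n$, and the seeded initialization are illustrative assumptions, not the paper's exact experimental setup.

```python
import random


def one_max(x):
    """OneMax fitness: the number of one-bits in the bit string."""
    return sum(x)


def one_plus_one_ea(n, initial_fitness, seed=0):
    """Sketch of the (1+1) EA on OneMax with a non-random start.

    The initial solution has exactly `initial_fitness` ones (a "good"
    initialization when this exceeds the random expectation n/2).
    Uses standard bit mutation with rate 1/n and elitist selection.
    Returns the number of iterations until the all-ones optimum is hit.
    """
    rng = random.Random(seed)
    # Build an initial solution of the prescribed quality.
    x = [1] * initial_fitness + [0] * (n - initial_fitness)
    rng.shuffle(x)
    fx = one_max(x)
    iterations = 0
    while fx < n:
        iterations += 1
        # Standard bit mutation: flip each bit independently with prob. 1/n.
        y = [bit ^ 1 if rng.random() < 1.0 / n else bit for bit in x]
        fy = one_max(y)
        if fy >= fx:  # elitist acceptance: keep the offspring if not worse
            x, fx = y, fy
    return iterations
```

Under this model, the quality of the initialization is controlled by `initial_fitness`, which is the quantity the runtime analysis parameterizes over.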
Antipov, D., Buzdalov, M., & Doerr, B. (2020). First steps towards a runtime analysis when starting with a good solution. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12270 LNCS, pp. 560–573). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58115-2_39