Anytime Performance Assessment in Blackbox Optimization Benchmarking

Abstract

We present concepts and recipes for anytime performance assessment when benchmarking optimization algorithms in a blackbox scenario. We consider runtime, often measured as the number of blackbox evaluations needed to reach a target quality, to be a universally measurable cost for solving a problem. Starting from the graph that depicts solution quality versus runtime, we argue that runtime is the only performance measure with a generic, meaningful, and quantitative interpretation; hence, our assessment is based solely on runtime measurements. We discuss proper choices of solution quality indicators in single- and multi-objective optimization, as well as in the presence of noise and constraints. We also discuss the choice of target values, budget-based targets, and the aggregation of runtimes using simulated restarts, averages, and empirical cumulative distributions, which generalize the convergence graphs of single runs. The presented performance assessment is to a large extent implemented in the Comparing Continuous Optimizers (COCO) platform, freely available at https://github.com/numbbo/coco.
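To make the two central notions of the abstract concrete, the following is a minimal sketch (not the COCO implementation, and not code from the paper): it turns a single run's convergence graph (best quality so far versus number of evaluations) into runtimes to reach a set of target values, and then aggregates runtimes from several runs into an empirical cumulative distribution over evaluation budgets. All function names and the example data are illustrative, and quality is assumed to be minimized.

# Sketch only: illustrative names and data, smaller quality values are better.
import numpy as np


def runtimes_to_targets(evaluations, best_so_far, targets):
    """For each target, return the first evaluation count at which the
    best-so-far quality reaches the target, or np.inf if it never does."""
    best = np.asarray(best_so_far, dtype=float)
    runtimes = []
    for target in targets:
        hits = np.nonzero(best <= target)[0]
        runtimes.append(evaluations[hits[0]] if hits.size else np.inf)
    return np.array(runtimes, dtype=float)


def ecdf(runtimes, budgets):
    """Fraction of (run, target) pairs solved within each evaluation budget."""
    runtimes = np.asarray(runtimes, dtype=float)
    return np.array([np.mean(runtimes <= budget) for budget in budgets])


# Illustrative data: two runs of some algorithm on one problem.
targets = [1e1, 1e-1, 1e-3, 1e-5]
run1 = runtimes_to_targets([1, 10, 100, 1000], [1e2, 1e0, 1e-2, 1e-4], targets)
run2 = runtimes_to_targets([1, 10, 100, 1000], [1e2, 1e1, 1e-1, 1e-3], targets)

budgets = np.logspace(0, 3, 4)  # 1, 10, 100, 1000 evaluations
print(ecdf(np.concatenate([run1, run2]), budgets))  # -> [0.   0.25 0.5  0.75]

Each point of the resulting curve reads as "the fraction of targets reached within the given budget, aggregated over runs", which is how the convergence graphs of single runs generalize to a distribution; simulated restarts (not shown here) would additionally recombine unsuccessful and successful runs into runtimes of a hypothetical restarted algorithm.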

Citation (APA)

Hansen, N., Auger, A., Brockhoff, D., & Tusar, T. (2022). Anytime Performance Assessment in Blackbox Optimization Benchmarking. IEEE Transactions on Evolutionary Computation, 26(6), 1293–1305. https://doi.org/10.1109/TEVC.2022.3210897
