Multi-objective equivalent random search

Abstract

This paper introduces a new metric vector for assessing the performance of multi-objective algorithms relative to the range of performance expected from a random search. The metric requires an ensemble of repeated trials, reducing the chance of overly favourable results. The random-search baseline for the function under test may be derived either analytically or from a Monte-Carlo process, so the metric is repeatable and accurate. The metric allows both the median and worst performance of different algorithms to be compared directly, and it scales well to high-dimensional many-objective problems. It quantifies, and is sensitive to, the distance of the solutions from the Pareto set, the distribution of points across the set, and the repeatability of the trials. Both the Monte-Carlo and closed-form analysis methods provide accurate analytic confidence intervals on the observed results.
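
The abstract does not spell out the metric vector itself, but the overall procedure it describes — build a distribution of random-search outcomes by Monte-Carlo, then locate an optimiser's result within that distribution — can be sketched as follows. Everything in this sketch is an illustrative assumption: the bi-objective test function, the use of 2-D hypervolume as the quality indicator, and the trial budgets are hypothetical stand-ins, not the paper's formulation.

```python
import random
import statistics

def evaluate(x):
    # Hypothetical bi-objective test function on [0, 1] (minimisation):
    # f1 = x, f2 = 1 - sqrt(x), giving a simple convex Pareto front.
    return (x, 1.0 - x ** 0.5)

def pareto_filter(points):
    # Keep only the non-dominated points (minimisation on both objectives).
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

def hypervolume_2d(front, ref=(1.1, 1.1)):
    # Exact 2-D hypervolume of a non-dominated front w.r.t. a reference
    # point, computed by slicing into rectangles along f1.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def random_search_trial(budget, rng):
    # One random-search trial: sample `budget` points uniformly and
    # score the resulting non-dominated front.
    pts = [evaluate(rng.random()) for _ in range(budget)]
    return hypervolume_2d(pareto_filter(pts))

rng = random.Random(0)
budget, n_trials = 100, 200

# Ensemble of repeated random-search trials -> baseline distribution.
baseline = sorted(random_search_trial(budget, rng) for _ in range(n_trials))
median = statistics.median(baseline)
lo, hi = baseline[int(0.025 * n_trials)], baseline[int(0.975 * n_trials)]
print(f"random-search baseline: median={median:.4f}, "
      f"95% band=[{lo:.4f}, {hi:.4f}]")

# An optimiser's trial result (here just another random-search trial,
# standing in for a real algorithm) is then ranked against the baseline.
result = random_search_trial(budget, rng)
percentile = sum(b <= result for b in baseline) / n_trials
print(f"optimiser result {result:.4f} sits at the "
      f"{100 * percentile:.0f}th percentile of the baseline")
```

Ranking an ensemble of optimiser trials this way yields both median and worst-case positions relative to random search, which is the style of comparison the abstract describes; the paper's actual metric vector and its analytic confidence intervals should be taken from the full text.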

Citation (APA)

Hughes, E. J. (2006). Multi-objective equivalent random search. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4193 LNCS, pp. 463–472). Springer Verlag. https://doi.org/10.1007/11844297_47
