Many evolutionary algorithms are designed to solve black-box multi-objective optimization problems (MOPs) using stochastic operators, where neither the analytical form nor the gradient of the problem is accessible. In some real-world applications, e.g., surrogate-based global optimization, the gradient of the objective function is accessible. In this case, it is natural to use a gradient-based multi-objective optimization algorithm to achieve fast convergence and stable solutions. In a relatively recent approach, the gradient of the hypervolume indicator in the decision space was derived, paving the way for maximizing the hypervolume indicator of a fixed-size population. In this paper, several mechanisms originating in the field of evolutionary computation are proposed to make this gradient ascent method applicable. Specifically, the well-known non-dominated sorting is used to help steer the dominated points. The principle of cumulative step-size control, originally proposed for evolution strategies, is adapted to control the step size dynamically. The resulting algorithm is called Hypervolume Indicator Gradient Ascent Multi-objective Optimization (HIGA-MO). The proposed algorithm is tested on the ZDT problems, and its performance is compared to other methods of moving the dominated points, as well as to some commonly used evolutionary multi-objective optimization algorithms.
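To illustrate the core idea of hypervolume gradient ascent, the following is a minimal sketch (not the authors' implementation): it computes the 2-D hypervolume of a population on a toy bi-objective problem and takes one ascent step in the decision space. The analytical hypervolume gradient of the paper is replaced here by a central finite-difference approximation, and the problem, reference point, and step size `sigma` are illustrative choices.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume of a set of 2-D objective vectors (minimization)
    with respect to the reference point `ref`."""
    pts = np.asarray(points, dtype=float)
    pts = pts[np.all(pts < ref, axis=1)]          # keep points dominating ref
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]              # sort by first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:                            # accumulate staircase area;
        if f2 < prev_f2:                          # dominated points add nothing
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def hv_gradient_step(X, f, ref, sigma=0.05, h=1e-6):
    """One gradient-ascent step on the population hypervolume, using a
    central finite difference in the decision space as an illustrative
    stand-in for the analytical hypervolume indicator gradient."""
    X = np.asarray(X, dtype=float)
    grad = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            Xp, Xm = X.copy(), X.copy()
            Xp[i, j] += h
            Xm[i, j] -= h
            hv_p = hypervolume_2d([f(x) for x in Xp], ref)
            hv_m = hypervolume_2d([f(x) for x in Xm], ref)
            grad[i, j] = (hv_p - hv_m) / (2 * h)
    return X + sigma * grad

# Toy bi-objective problem (hypothetical, not a ZDT function):
# f1(x) = x^2, f2(x) = (x - 2)^2, with a single decision variable.
f = lambda x: np.array([x[0] ** 2, (x[0] - 2.0) ** 2])
X = np.array([[0.5], [1.0], [1.5]])               # fixed-size population
ref = np.array([5.0, 5.0])

hv0 = hypervolume_2d([f(x) for x in X], ref)
X1 = hv_gradient_step(X, f, ref)
hv1 = hypervolume_2d([f(x) for x in X1], ref)
assert hv1 > hv0                                  # the ascent step improves HV
```

Note that a finite-difference gradient vanishes for strictly dominated points (they contribute nothing to the hypervolume), which is exactly why HIGA-MO adds mechanisms such as non-dominated sorting to steer those points; this sketch only covers the well-behaved non-dominated case.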
CITATION STYLE
Wang, H., Deutz, A., Bäck, T., & Emmerich, M. (2017). Hypervolume indicator gradient ascent multi-objective optimization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10173 LNCS, pp. 654–669). Springer Verlag. https://doi.org/10.1007/978-3-319-54157-0_44