Improved NSGA-III with Second-Order Difference Random Strategy for Dynamic Multi-Objective Optimization

Abstract

Most real-world problems with two or three objectives are dynamic, and the environment of the problem may change over time. To better solve dynamic multi-objective problems, two proposed strategies (a second-order difference strategy and a random strategy) were incorporated into NSGA-III, yielding SDNSGA-III. When the environment changes, SDNSGA-III first applies the second-order difference strategy and the random strategy to improve the individuals of the next-generation population, and then NSGA-III is employed to optimize these individuals and obtain optimal solutions. Our experiments were conducted with two primary objectives. The first was to evaluate the metrics mean inverted generational distance (MIGD), mean generational distance (MGD), and mean hypervolume (MHV) on the test functions (Fun1 to Fun6) for the proposed algorithm and four state-of-the-art algorithms. The second was to compare the metric values of NSGA-III with a single strategy against SDNSGA-III, demonstrating the efficiency of combining the two strategies in SDNSGA-III. The comparative data obtained from the experiments show that SDNSGA-III achieves good convergence and diversity compared with the four other evolutionary algorithms. Moreover, the efficiency of the second-order difference strategy and the random strategy is also analyzed in this paper.
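The abstract does not give implementation details of the change-response step, so the following is a minimal sketch of how such a response could look, assuming a second-order (difference-of-differences) extrapolation of each individual's movement across consecutive environments plus uniform random re-sampling of a fraction of the population. All function names (`second_order_difference_predict`, `random_reinitialize`, `nsga3_optimize`) and parameters are illustrative assumptions, not the authors' code.

```python
import numpy as np

def second_order_difference_predict(pop_t, pop_t1, pop_t2, bounds):
    """Hypothetical second-order difference prediction.

    pop_t, pop_t1, pop_t2: decision-variable populations from the three most
    recent environments, each of shape (N, n_vars). The prediction extrapolates
    each individual using both its last movement and the change of that
    movement (the second-order difference), then clips to the variable bounds.
    """
    first_diff = pop_t - pop_t1            # most recent movement
    prev_diff = pop_t1 - pop_t2            # previous movement
    second_diff = first_diff - prev_diff   # change of the movement itself
    predicted = pop_t + first_diff + 0.5 * second_diff
    low, high = bounds
    return np.clip(predicted, low, high)

def random_reinitialize(pop, bounds, fraction=0.2, rng=None):
    """Hypothetical random strategy: re-sample a fraction of individuals
    uniformly within the decision bounds to preserve diversity after a change."""
    rng = np.random.default_rng() if rng is None else rng
    low, high = bounds
    new_pop = pop.copy()
    n_replace = int(fraction * len(pop))
    idx = rng.choice(len(pop), size=n_replace, replace=False)
    new_pop[idx] = rng.uniform(low, high, size=(n_replace, pop.shape[1]))
    return new_pop

# Sketch of the response to a detected environment change, before handing the
# population back to an unmodified NSGA-III optimizer (placeholders below):
# if environment_changed():
#     pop = second_order_difference_predict(pop, prev_pop, prev_prev_pop, bounds)
#     pop = random_reinitialize(pop, bounds)
#     pop = nsga3_optimize(pop, problem)
```

The split between prediction (to track the moving Pareto set) and random re-sampling (to keep diversity) mirrors the two strategies named in the abstract; the exact extrapolation coefficients and the re-sampled fraction are assumptions for illustration only.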

Cite

CITATION STYLE

APA

Zhang, H., Wang, G. G., Dong, J., & Gandomi, A. H. (2021). Improved NSGA-III with second-order difference random strategy for dynamic multi-objective optimization. Processes, 9(6). https://doi.org/10.3390/pr9060911
