Genetic algorithms (GAs) are a subclass of evolutionary algorithms often used to solve difficult combinatorial or non-linear problems. However, most GAs must be configured for a particular problem type, and even then their performance depends on many hyperparameters and reproduction operators. In this paper, a reinforcement learning (RL) approach is designed to adaptively set the parameters of a GA used to solve the Capacitated Vehicle Routing Problem (CVRP). An RL agent interacts with the GA environment by taking actions that adjust the parameters governing its evolution, starting from a given initial point. The results obtained by this RL-GA procedure are then compared with those obtained using static parameter values. Across a set of benchmark problems, the solutions obtained by the RL-GA are up to 11% better than those obtained with the static-parameter alternative. Examination of the results shows that the RL-GA maintains greater diversity in the population pool, especially as iterations accrue. Computational runs are traced to show how the RL agent learns from population diversity and solution improvements over time, leading to near-optimal solutions.
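The following is a minimal sketch of the kind of RL-GA loop the abstract describes, not the authors' implementation. It assumes a toy routing (TSP-style) surrogate in place of the full CVRP, a discrete action set of mutation rates, a state built from population diversity and recent improvement, a reward equal to the improvement in best fitness, and a tabular Q-learning agent; all of these choices are illustrative assumptions.

```python
# Hypothetical RL-tuned GA sketch: a Q-learning agent picks the mutation rate
# each generation based on population diversity and recent improvement.
import random

random.seed(0)

N_CUSTOMERS = 20
# Toy random "distance" matrix standing in for a CVRP/TSP instance (assumption).
DISTANCES = [[random.random() for _ in range(N_CUSTOMERS)] for _ in range(N_CUSTOMERS)]

def fitness(tour):
    """Total tour length (lower is better) for a customer permutation."""
    return sum(DISTANCES[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def diversity(pop):
    """Fraction of unique individuals, used here as a crude diversity proxy."""
    return len({tuple(t) for t in pop}) / len(pop)

def crossover(a, b):
    """Order crossover (OX): keep a slice of parent a, fill the rest from b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(tour, rate):
    """Swap mutation applied with the RL-chosen probability."""
    tour = tour[:]
    if random.random() < rate:
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

# Discrete actions: candidate mutation rates the agent may select each generation.
ACTIONS = [0.05, 0.2, 0.5, 0.9]
Q = {}                      # Q[(state, action_index)] -> estimated value
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

def state_of(pop, improved):
    """Coarse state: binned diversity plus whether the best fitness just improved."""
    return (round(diversity(pop), 1), improved)

def choose(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q.get((state, a), 0.0))

pop = [random.sample(range(N_CUSTOMERS), N_CUSTOMERS) for _ in range(50)]
best = min(fitness(t) for t in pop)
state = state_of(pop, True)

for gen in range(200):
    action = choose(state)
    rate = ACTIONS[action]
    # One GA generation: tournament selection, OX crossover, swap mutation.
    new_pop = []
    for _ in range(len(pop)):
        p1, p2 = (min(random.sample(pop, 3), key=fitness) for _ in range(2))
        new_pop.append(mutate(crossover(p1, p2), rate))
    pop = new_pop
    new_best = min(fitness(t) for t in pop)
    reward = best - new_best            # positive when the best tour got shorter
    improved = new_best < best
    best = min(best, new_best)
    next_state = state_of(pop, improved)
    # Tabular Q-learning update on the parameter-setting policy.
    old = Q.get((state, action), 0.0)
    target = reward + GAMMA * max(Q.get((next_state, a), 0.0) for a in range(len(ACTIONS)))
    Q[(state, action)] = old + ALPHA * (target - old)
    state = next_state

print(f"best tour length after 200 generations: {best:.3f}")
```

In this sketch the reward and state mirror the signals highlighted in the abstract (solution improvement and population diversity); the actual paper may use different state features, actions, or a different RL algorithm.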
Citation: Quevedo, J., Abdelatti, M., Imani, F., & Sodhi, M. (2021). Using reinforcement learning for tuning genetic algorithms. In GECCO 2021 Companion: Proceedings of the 2021 Genetic and Evolutionary Computation Conference Companion (pp. 1503–1507). Association for Computing Machinery. https://doi.org/10.1145/3449726.3463203