Simulation-based optimization using agent-based models is typically carried out under the assumption that the gradient describing the sensitivity of the simulation output to the input cannot be evaluated directly. To still apply gradient-based optimization methods, which efficiently steer the optimization towards a local optimum, gradient estimation methods can be employed. However, many simulation runs are needed to obtain accurate estimates if the input dimension is large. Automatic differentiation (AD) is a family of techniques to compute gradients of general programs directly. Here, we explore the use of AD in the context of time-driven agent-based simulations. By substituting common discrete model elements such as conditional branching with smooth approximations, we obtain gradient information across discontinuities in the model logic. Using microscopic traffic models and an epidemics model as examples, we study the fidelity and overhead of the differentiable models, as well as the convergence speed and solution quality achieved by gradient-based optimization compared to gradient-free methods. In traffic signal timing optimization problems with high input dimension, the gradient-based methods exhibit substantially superior performance. Finally, we demonstrate that the approach enables gradient-based training of neural network-controlled simulation entities embedded in the model logic.
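The smoothing idea described in the abstract, replacing a hard conditional branch with a sigmoid so that a nonzero gradient flows across the discontinuity, can be illustrated with a minimal sketch. The dual-number forward-mode AD below and the names `Dual`, `smooth_branch`, and the sharpness parameter `k` are illustrative assumptions, not the paper's actual implementation:

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    """Minimal forward-mode AD value: primal value and derivative."""
    val: float
    der: float

    def __sub__(self, other):
        o = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val - o.val, self.der - o.der)

    def __mul__(self, other):
        o = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)

def sigmoid(x: Dual) -> Dual:
    s = 1.0 / (1.0 + math.exp(-x.val))
    return Dual(s, s * (1.0 - s) * x.der)

def smooth_branch(x: Dual, threshold: float, k: float = 10.0) -> Dual:
    # Smooth surrogate for the discrete branch `1.0 if x > threshold else 0.0`.
    # The hard branch has a zero derivative almost everywhere; the sigmoid
    # relaxation yields a usable gradient near the threshold. `k` controls
    # how closely the surrogate approximates the original step.
    return sigmoid((x - threshold) * Dual(k, 0.0))

# Derivative with respect to x (seed der=1.0), evaluated at the threshold:
y = smooth_branch(Dual(0.5, 1.0), threshold=0.5)
print(y.val, y.der)  # 0.5 at the threshold, with a nonzero derivative k/4
```

Larger values of `k` make the surrogate a closer match to the discrete branch but concentrate the gradient in a narrower region, a fidelity/optimizability trade-off of the kind the paper studies.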
Andelfinger, P. (2021). Differentiable Agent-Based Simulation for Gradient-Guided Simulation-Based Optimization. In SIGSIM-PADS 2021 - Proceedings of the 2021 ACM SIGSIM Conference on Principles of Advanced Discrete Simulation (pp. 27–38). Association for Computing Machinery, Inc. https://doi.org/10.1145/3437959.3459261