GPU-based parallelization of agent-based modeling (ABM) has drawn attention over the last decade as a way to meet the computational demands of scalable, long-running simulations in practical use. From a software-productivity viewpoint, model designers would prefer general ABM frameworks for GPU parallelization. However, in transitioning from single-node or cluster-computing platforms to GPUs, most general ABM frameworks keep their APIs at the script level, delegate only a limited number of agent functions to the GPU, and copy agent data between host and device memory on every function call, which neither eases agent description nor maximizes GPU parallelism. To address these problems, we have developed the MASS (Multi-Agent Spatial Simulation) CUDA library, which allows users to describe an entire simulation model in CUDA C++, automates parallelization of the whole model on the GPU, and minimizes host-to-device memory transfers. However, our straightforward implementation did not improve parallel performance. Focusing on data-parallel computation on the GPU, we examined MASS overheads in GPU memory usage and developed optimization techniques that reduce kernel context switches, tune kernel configuration, use constant memory, and cut the overheads incurred by agent population, migration, and termination. These techniques improved the execution performance of Heat2D and SugarScape, making them 3.9 and 5.8 times faster, respectively, than the corresponding sequential C++ programs. This paper details our GPU parallelization techniques for multi-agent simulation and demonstrates MASS CUDA's performance improvements.
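Two of the techniques named above, placing read-only simulation parameters in constant memory and keeping agent data resident on the device across kernel launches, can be illustrated with a minimal CUDA sketch. This is not MASS CUDA's actual API; `SimParams`, `stepKernel`, and all names below are hypothetical, and the update rule is a placeholder.

```cuda
#include <cuda_runtime.h>

// Hypothetical read-only simulation parameters (not MASS CUDA's API).
struct SimParams { int width; int height; float diffusionRate; };

// Constant memory is cached and broadcast to all threads, so each kernel
// launch avoids re-reading these parameters from global memory.
__constant__ SimParams d_params;

// One thread per cell; a placeholder stencil-style update.
__global__ void stepKernel(const float* current, float* next) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= d_params.width || y >= d_params.height) return;
    int i = y * d_params.width + x;
    next[i] = current[i] * d_params.diffusionRate;  // placeholder update
}

int main() {
    SimParams h_params = {1024, 1024, 0.25f};
    cudaMemcpyToSymbol(d_params, &h_params, sizeof(SimParams));

    size_t bytes = size_t(h_params.width) * h_params.height * sizeof(float);
    float *d_cur, *d_next;
    cudaMalloc(&d_cur, bytes);
    cudaMalloc(&d_next, bytes);

    // Block shape would be tuned per device in practice (occupancy).
    dim3 block(16, 16);
    dim3 grid((h_params.width + block.x - 1) / block.x,
              (h_params.height + block.y - 1) / block.y);

    // The grid stays resident in device memory for all time steps:
    // no host-to-device or device-to-host copies inside the loop.
    for (int t = 0; t < 100; ++t) {
        stepKernel<<<grid, block>>>(d_cur, d_next);
        float* tmp = d_cur; d_cur = d_next; d_next = tmp;  // swap pointers on host
    }
    cudaDeviceSynchronize();

    cudaFree(d_cur);
    cudaFree(d_next);
    return 0;
}
```

The pointer swap after each launch reuses the two device buffers, so per-step cost is the kernel launch alone rather than a round-trip copy through host memory.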
Kosiachenko, L., Hart, N., & Fukuda, M. (2019). MASS CUDA: A General GPU Parallelization Framework for Agent-Based Models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11523 LNAI, pp. 139–152). Springer Verlag. https://doi.org/10.1007/978-3-030-24209-1_12