Many guidelines have been developed for simulations in general, or specifically for the agent-based models that underpin artificial societies. When such guidelines are applied to examine existing practices, assessment studies are limited by the artifacts that modelers release. Although code is the final product defining an artificial society, 90% of the code produced is never released; previous assessments therefore necessarily focused on higher-level items such as conceptual design or validation. We address this gap by collecting 338 artificial societies from two hosting platforms, CoMSES/OpenABM and GitHub. An innovation of our approach is the use of software engineering techniques to automatically examine the models with respect to items such as commenting the code, using libraries, or dividing the code into functions. We found that developers of artificial societies code the decision-making of their agents from scratch in every model, despite the existence of several libraries that could serve as building blocks.
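To illustrate the kind of automatic examination described above, the following is a minimal sketch (not the authors' actual tooling) of how simple static metrics could be computed from a NetLogo source file. It assumes standard NetLogo conventions: comments begin with `;`, and procedures are declared with `to` or `to-report` and closed with `end`. The function name `netlogo_metrics` and the specific metrics shown are illustrative choices, not items taken from the paper.

```python
import re

def netlogo_metrics(source: str) -> dict:
    """Compute comment density and procedure count for NetLogo code.

    A hypothetical, simplified example of the static checks that
    software engineering techniques can automate over model code.
    """
    # Keep only non-empty lines for the density calculation.
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    # NetLogo line comments start with ';'.
    comment_lines = sum(1 for ln in lines if ln.startswith(";"))
    # Procedures are declared as 'to <name>' or 'to-report <name>'.
    procedures = len(re.findall(r"^\s*to(?:-report)?\s+\S+", source, flags=re.M))
    return {
        "comment_density": comment_lines / len(lines) if lines else 0.0,
        "procedures": procedures,
    }

sample = """
; setup the world
to setup
  clear-all
end

to-report agent-count
  report count turtles
end
"""

print(netlogo_metrics(sample))
```

Running the sketch on the sample above counts one comment among seven non-empty lines and finds two procedures; metrics like these can then be aggregated across hundreds of repositories.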
Citation
Vendome, C., Rao, D. M., & Giabbanelli, P. J. (2020). How do Modelers Code Artificial Societies? Investigating Practices and Quality of Netlogo Codes from Large Repositories. In Proceedings of the 2020 Spring Simulation Conference, SpringSim 2020. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.22360/SpringSim.2020.HSAA.007