A major challenge in creating human-level game AI is building agents capable of operating in imperfect-information environments. In real-time strategy games, an opponent's technological progress and the locations of enemy units are only partially observable. To overcome this limitation, we explore a particle-based approach for estimating the locations of enemy units that have previously been encountered. We represent state estimation as an optimization problem and automatically learn parameters for the particle model by mining a corpus of expert StarCraft replays. The particle model tracks opponent units and provides conditions for activating tactical behaviors in our StarCraft bot. Our results show that incorporating a learned particle model improves the performance of EISBot by 10% over baseline approaches.
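The abstract does not spell out the model's internals, so the sketch below is only a rough illustration of what a per-unit particle tracker of this kind could look like. The class name EnemyUnitParticleModel, the num_particles, movement_noise, and decay parameters, and the diffusion-plus-decay update rule are all hypothetical placeholders; the paper itself learns its particle-model parameters from expert replays rather than using the fixed values shown here.

```python
import math
import random


class EnemyUnitParticleModel:
    """Toy particle model for tracking enemy units after they leave vision.

    Illustrative sketch only: the parameters and update rule are assumptions,
    standing in for the parameters the paper learns from expert replays.
    """

    def __init__(self, num_particles=50, movement_noise=2.0, decay=0.95):
        self.num_particles = num_particles
        self.movement_noise = movement_noise  # std. dev. of per-step drift
        self.decay = decay                    # per-step confidence decay
        self.tracks = {}                      # unit_id -> [(x, y, weight)]

    def observe(self, unit_id, x, y):
        """Snap a unit's particles to its last observed map position."""
        self.tracks[unit_id] = [(x, y, 1.0) for _ in range(self.num_particles)]

    def step(self):
        """Diffuse particles and decay their weights for each frame the
        unit remains unobserved."""
        for unit_id, particles in self.tracks.items():
            moved = []
            for x, y, w in particles:
                angle = random.uniform(0.0, 2.0 * math.pi)
                dist = abs(random.gauss(0.0, self.movement_noise))
                moved.append((x + dist * math.cos(angle),
                              y + dist * math.sin(angle),
                              w * self.decay))
            self.tracks[unit_id] = moved

    def estimate(self, unit_id):
        """Return the weighted mean position of a tracked unit, or None."""
        particles = self.tracks.get(unit_id)
        if not particles:
            return None
        total = sum(w for _, _, w in particles)
        if total <= 0.0:
            return None
        est_x = sum(x * w for x, _, w in particles) / total
        est_y = sum(y * w for _, y, w in particles) / total
        return est_x, est_y


if __name__ == "__main__":
    model = EnemyUnitParticleModel()
    model.observe(unit_id=7, x=64.0, y=128.0)  # enemy unit spotted at (64, 128)
    for _ in range(24):                        # 24 frames without observation
        model.step()
    print(model.estimate(7))                   # approximate current position
```

In a bot, an estimate like this could gate tactical behaviors, which is the role the abstract describes for the particle model in EISBot; how the real system conditions its behaviors on the estimates is not specified here.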
CITATION
Weber, B. G., Mateas, M., & Jhala, A. (2011). A particle model for state estimation in real-time strategy games. In Proceedings of the 7th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2011 (pp. 103–108). https://doi.org/10.1609/aiide.v7i1.12424