Abstract
A key challenge for planning systems in real-time multi-agent domains is searching large action spaces to decide an agent's next action. Previous work has shown that handcrafted action abstractions allow planning systems to focus their search on a subset of promising actions. In this paper we show that the problem of generating action abstractions can be cast as the problem of selecting a subset of pure strategies from a pool of options. We model this selection as a two-player game in which each player's strategy set is the powerset of the pool of options; we call this game the subset selection game. We then present an evolutionary algorithm for solving such games. Empirical results on small µRTS matches show that our evolutionary approach converges to a Nash equilibrium of the subset selection game. Results on larger matches show that search algorithms using action abstractions derived by our evolutionary approach substantially outperform all state-of-the-art planning systems tested.
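As a rough illustration of the idea summarized above, the sketch below shows a coevolutionary loop that maintains one population of option subsets per player and scores each subset by its average payoff against the opposing population. This is a minimal sketch under assumptions, not the paper's algorithm: the option pool, the play_match payoff, and the truncation-plus-mutation step are all hypothetical placeholders; in the actual system a subset's value would come from running µRTS matches between search agents restricted to the chosen subsets.

import random

# Hypothetical pool of pure strategies (e.g., scripted behaviours). These
# names are placeholders, not the scripts evaluated in the paper.
OPTION_POOL = ["worker_rush", "light_rush", "ranged_rush", "heavy_rush",
               "expand", "defend_base", "harvest_boost", "build_barracks"]

def play_match(subset_a, subset_b):
    """Placeholder payoff for player A planning with subset_a against an
    opponent planning with subset_b. In practice this would run a full
    match between two search agents restricted to those subsets."""
    rng = random.Random(hash((frozenset(subset_a), frozenset(subset_b))))
    return rng.uniform(-1.0, 1.0)

def random_subset():
    k = random.randint(1, len(OPTION_POOL))
    return set(random.sample(OPTION_POOL, k))

def mutate(subset):
    """Flip membership of one option, keeping the subset non-empty."""
    child = set(subset)
    option = random.choice(OPTION_POOL)
    if option in child and len(child) > 1:
        child.remove(option)
    else:
        child.add(option)
    return child

def evolve_step(population, fitness):
    """Keep the better half, refill with mutated copies of survivors."""
    ranked = [s for _, s in sorted(zip(fitness, population),
                                   key=lambda p: p[0], reverse=True)]
    survivors = ranked[:len(ranked) // 2]
    children = [mutate(random.choice(survivors))
                for _ in range(len(ranked) - len(survivors))]
    return survivors + children

def coevolve(pop_size=10, generations=50):
    pop_a = [random_subset() for _ in range(pop_size)]
    pop_b = [random_subset() for _ in range(pop_size)]
    for _ in range(generations):
        # A subset's fitness is its average payoff against the opposing
        # population; player B minimizes A's payoff (zero-sum game).
        fit_a = [sum(play_match(a, b) for b in pop_b) / pop_size for a in pop_a]
        fit_b = [-sum(play_match(a, b) for a in pop_a) / pop_size for b in pop_b]
        pop_a = evolve_step(pop_a, fit_a)
        pop_b = evolve_step(pop_b, fit_b)
    return pop_a, pop_b

if __name__ == "__main__":
    pop_a, pop_b = coevolve()
    # pop_a[0] is the top-ranked subset from the final selection step; it
    # would serve as the evolved action abstraction for player A.
    print("Evolved action abstraction for player A:", sorted(pop_a[0]))

With a real match-based payoff in place of play_match, the two populations would pressure each other toward subsets that are hard to exploit, which is the sense in which the approach aims to approximate a Nash equilibrium of the subset selection game.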
Citation
Mariño, J. R. H., Moraes, R. O., Toledo, C., & Lelis, L. H. S. (2019). Evolving action abstractions for real-time planning in extensive-form games. In 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Innovative Applications of Artificial Intelligence Conference, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019 (pp. 2330–2337). AAAI Press. https://doi.org/10.1609/aaai.v33i01.33012330