Enhancements for Monte-Carlo Tree Search in Ms Pac-Man

19 citations · 32 Mendeley readers

Abstract

In this paper, enhancements for the Monte-Carlo Tree Search (MCTS) framework are investigated to play Ms Pac-Man. MCTS is used to find an optimal path for an agent at each turn, determining the move to make based on randomised simulations. Ms Pac-Man is a real-time arcade game in which the protagonist has several independent goals but no conclusive terminal state. Unlike games such as Chess or Go, there is no state in which the player wins the game. Furthermore, the Pac-Man agent has to compete with a range of different ghost agents, so only limited assumptions can be made about the opponent's behaviour. In order to expand the capabilities of existing MCTS agents, five enhancements are discussed: 1) a variable-depth tree, 2) playout strategies for the ghost team and Pac-Man, 3) including long-term goals in scoring, 4) endgame tactics, and 5) a Last-Good-Reply policy for memorising rewarding moves during playouts. An average performance gain of 40,962 points, compared to the average score of the top-scoring Pac-Man agent during the CIG'11, is achieved by employing these methods. © 2012 IEEE.
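The enhancements above build on the standard four-phase MCTS loop (selection, expansion, simulation, backpropagation). As an illustration only, here is a minimal UCT-style sketch on a toy pellet-collecting corridor, a stand-in for Ms Pac-Man's maze; all names, the toy domain, and the parameter values are hypothetical and not the authors' implementation.

```python
import math
import random

class Node:
    """A search-tree node holding visit count and accumulated reward."""
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.value = [], 0, 0.0

# Hypothetical toy domain: collect pellets on a 1D corridor.
SIZE, PELLETS, MAX_TURNS = 7, frozenset({1, 3, 6}), 6

def legal_moves(state):
    pos, eaten, turn = state
    return [] if turn >= MAX_TURNS else [-1, +1]

def apply_move(state, move):
    pos, eaten, turn = state
    pos = max(0, min(SIZE - 1, pos + move))
    if pos in PELLETS:
        eaten = eaten | {pos}
    return (pos, frozenset(eaten), turn + 1)

def playout(state):
    """Random simulation to the end of the episode; score = pellets eaten."""
    while legal_moves(state):
        state = apply_move(state, random.choice(legal_moves(state)))
    return len(state[1])

def uct_select(node, c=1.4):
    """Pick the child maximising the UCT value (exploitation + exploration)."""
    return max(node.children, key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(legal_moves(node.state)):
            node = uct_select(node)
        # 2. Expansion: add one untried move, if any remain.
        moves = legal_moves(node.state)
        if moves:
            tried = {ch.move for ch in node.children}
            move = random.choice([m for m in moves if m not in tried])
            node = Node(apply_move(node.state, move), parent=node, move=move)
            node.parent.children.append(node)
        # 3. Simulation: random playout from the new node.
        reward = playout(node.state)
        # 4. Backpropagation: update statistics up to the root.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited root move.
    return max(root.children, key=lambda ch: ch.visits).move
```

Starting at position 0 with all pellets to the right, the sketch should settle on moving right (+1). The paper's enhancements would slot into this skeleton, e.g. replacing the random `playout` with ghost-team and Pac-Man playout strategies, capping the tree at a variable depth, and reusing last good replies during simulation.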

Citation (APA)

Pepels, T., & Winands, M. H. M. (2012). Enhancements for Monte-Carlo Tree Search in Ms Pac-Man. In 2012 IEEE Conference on Computational Intelligence and Games, CIG 2012 (pp. 265–272). https://doi.org/10.1109/CIG.2012.6374165
