We investigate how reinforcement learning can be used to train level-designing agents. This represents a new approach to procedural content generation in games, where level design is framed as a game, and the content generator itself is learned. By seeing the design problem as a sequential task, we can use reinforcement learning to learn how to take the next action so that the expected final level quality is maximized. This approach can be used when few or no examples exist to train from, and the trained generator is very fast. We investigate three different ways of transforming two-dimensional level design problems into Markov decision processes, and apply these to three game environments.
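To make the framing concrete, below is a minimal, self-contained sketch of how a 2D level design task can be cast as a Markov decision process in the spirit of the abstract: the agent edits one tile position at a time and is rewarded for the change in a level-quality score, so maximizing return approximates maximizing final level quality. All class, function, and tile names here are illustrative assumptions, not the authors' released framework, and the toy quality function stands in for a real playability measure.

```python
import numpy as np

# Hypothetical sketch of a tile-editing MDP for level design.
# The agent visits one tile position at a time and decides which
# tile to place there (or to leave it unchanged).

class TileEditEnv:
    EMPTY, WALL = 0, 1  # toy tile set (assumption for illustration)

    def __init__(self, width=8, height=8, max_steps=200, rng=None):
        self.width, self.height = width, height
        self.max_steps = max_steps
        self.rng = rng or np.random.default_rng()

    def reset(self):
        # Start from a random level; the agent must edit it into a good one.
        self.level = self.rng.integers(0, 2, size=(self.height, self.width))
        self.pos = (0, 0)
        self.steps = 0
        return self._observe()

    def _observe(self):
        # Observation: the current level plus the position being edited.
        return {"level": self.level.copy(), "position": self.pos}

    def _quality(self, level):
        # Placeholder quality score (fraction of empty tiles); a real
        # generator would measure playability, path length, etc.
        return float((level == self.EMPTY).mean())

    def step(self, action):
        # action in {0: leave tile, 1: place EMPTY, 2: place WALL}
        before = self._quality(self.level)
        y, x = self.pos
        if action == 1:
            self.level[y, x] = self.EMPTY
        elif action == 2:
            self.level[y, x] = self.WALL
        after = self._quality(self.level)

        # Reward is the change in quality, so the episode return tracks
        # the improvement of the final level over the starting level.
        reward = after - before

        # Advance to the next tile position in scanline order.
        nxt = (y * self.width + x + 1) % (self.width * self.height)
        self.pos = (nxt // self.width, nxt % self.width)

        self.steps += 1
        done = self.steps >= self.max_steps
        return self._observe(), reward, done
```

An off-the-shelf policy-gradient or Q-learning agent could then be trained on episodes of such step() calls; once trained, generating a level only requires one forward pass of the policy per edit, which is why the trained generator is fast.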
Khalifa, A., Bontrager, P., Earle, S., & Togelius, J. (2020). PCGRL: Procedural content generation via reinforcement learning. In Proceedings of the 16th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2020 (pp. 95–101). The AAAI Press. https://doi.org/10.1609/aiide.v16i1.7416