Recent years have witnessed growing interest in data-driven approaches to interactive narrative planning and drama management. Reinforcement learning (RL) techniques show particular promise because they can automatically induce and refine models for tailoring game events by optimizing reward functions that explicitly encode the quality of interactive narrative experiences. However, because interactive narrative experience is inherently subjective, designing effective reward functions is challenging. In this paper, we investigate the impact of alternate reward formulations in an RL-based interactive narrative planner for the CRYSTAL ISLAND game environment. We formalize interactive narrative planning as a modular reinforcement learning (MRL) problem. By decomposing interactive narrative planning into multiple independent sub-problems, MRL enables efficient induction of interactive narrative policies directly from a corpus of human players' experience data. Empirical analyses suggest that interactive narrative policies induced with MRL are likely to yield better player outcomes than heuristic or baseline policies. Furthermore, we observe that MRL-based interactive narrative planners are robust to alternate reward discount parameterizations.
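To make the decomposition concrete, the sketch below illustrates the general modular RL scheme the abstract describes: each sub-problem (module) learns its own tabular Q-function independently, and a shared arbitrator picks the action whose summed Q-value across modules is greatest ("greatest-mass" arbitration). The environment, module reward functions, and action labels here are hypothetical stand-ins, not the paper's actual CRYSTAL ISLAND formulation.

```python
from collections import defaultdict
import random

random.seed(0)

ACTIONS = [0, 1]  # hypothetical planner actions, e.g., 0 = "hold back", 1 = "tailor event"


def q_learning_module(episodes, reward_fn, alpha=0.1, gamma=0.9, eps=0.1):
    """Learn a tabular Q-function for one sub-problem (module).

    States are integers 0..3 on a tiny chain (terminal at 3);
    reward_fn(s, a) supplies this module's *local* reward. Each
    module trains independently, as in modular RL.
    """
    Q = defaultdict(float)
    for _ in range(episodes):
        s = 0
        while s < 3:
            # epsilon-greedy action selection within this module
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a2: Q[(s, a2)])
            r = reward_fn(s, a)
            s_next = s + 1
            best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
            # standard Q-learning backup
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s_next
    return Q


def arbitrate(modules, s):
    """Greatest-mass arbitration: choose the action with the
    largest summed Q-value across all modules' Q-functions."""
    return max(ACTIONS, key=lambda a: sum(Q[(s, a)] for Q in modules))


# Two hypothetical modules with different local rewards:
# one strongly prefers action 1; the other is indifferent.
narrative_Q = q_learning_module(500, lambda s, a: 1.0 if a == 1 else 0.0)
tutorial_Q = q_learning_module(500, lambda s, a: 0.5)

action = arbitrate([narrative_Q, tutorial_Q], s=0)
print(action)
```

Because the modules share a state and action space but optimize separate rewards, each Q-function stays small and can be estimated efficiently from a shared corpus of experience data; the arbitration step is the only point where the sub-policies interact.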
Rowe, J. P., Mott, B. W., & Lester, J. C. (2014). Optimizing player experience in interactive narrative planning: A modular reinforcement learning approach. In Proceedings of the 10th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2014 (pp. 160–166). AAAI press. https://doi.org/10.1609/aiide.v10i1.12733