A modular reinforcement learning framework for interactive narrative planning

Abstract

A key functionality provided by interactive narrative systems is narrative adaptation: tailoring story experiences in response to users' actions and needs. We present a data-driven framework for dynamically tailoring events in interactive narratives using modular reinforcement learning. The framework involves decomposing an interactive narrative into multiple concurrent sub-problems, formalized as adaptable event sequences (AESs). Each AES is modeled as an independent Markov decision process (MDP). Policies for each MDP are induced using a corpus of user interaction data from an interactive narrative system with exploratory narrative adaptation policies. Rewards are computed based on users' experiential outcomes. Conflicts between multiple policies are handled using arbitration procedures. In addition to introducing the framework, we describe a corpus of user interaction data from a testbed interactive narrative, CRYSTAL ISLAND, for inducing narrative adaptation policies. Empirical findings suggest that the framework can effectively shape users' interactive narrative experiences. Copyright © 2013, Association for the Advancement of Artificial Intelligence. All rights reserved.
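The decomposition the abstract describes can be pictured with a small sketch: each adaptable event sequence (AES) is an independent MDP whose policy proposes a narrative adaptation, and an arbitration step resolves conflicts between proposals. Everything below is illustrative and assumed, not taken from the paper: the module names, states, Q-values, and the winner-take-all arbitration rule (picking the proposal with the highest estimated value) are one simple possibility among several.

```python
# Hypothetical sketch of modular RL for narrative adaptation: each AES
# is an independent MDP with its own learned value estimates, and an
# arbitration procedure resolves conflicting proposals. All names and
# numbers are illustrative, not from the paper.

ACTIONS = ["reveal_clue", "delay_event", "no_adaptation"]

class AESModule:
    """One adaptable event sequence, modeled as an independent MDP."""
    def __init__(self, name, q_table):
        self.name = name
        self.q = q_table  # maps state -> {action: estimated value}

    def propose(self, state):
        """Return (best_action, q_value) for the current narrative state."""
        values = self.q.get(state, {a: 0.0 for a in ACTIONS})
        action = max(values, key=values.get)
        return action, values[action]

def arbitrate(modules, state):
    """Winner-take-all arbitration (an assumed scheme): pick the
    proposal with the highest estimated value across modules."""
    proposals = [(m.name, *m.propose(state)) for m in modules]
    return max(proposals, key=lambda p: p[2])

# Two toy AES modules with hand-set value estimates for one state.
mystery = AESModule("mystery_pacing",
                    {"s0": {"reveal_clue": 0.8, "delay_event": 0.2,
                            "no_adaptation": 0.1}})
tutoring = AESModule("tutoring",
                     {"s0": {"reveal_clue": 0.3, "delay_event": 0.5,
                             "no_adaptation": 0.4}})

winner = arbitrate([mystery, tutoring], "s0")
# The mystery-pacing module's proposal wins with value 0.8.
```

In the full framework each module's Q-values would be induced from the logged user-interaction corpus, with rewards computed from experiential outcomes; the sketch only shows how independent policies and an arbitration rule fit together.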

Citation (APA)
Rowe, J. P., & Lester, J. C. (2013). A modular reinforcement learning framework for interactive narrative planning. In AAAI Workshop - Technical Report (Vol. WS-13-21, pp. 57–63). AI Access Foundation. https://doi.org/10.1609/aiide.v9i4.12636
