Predicting generated story quality with quantitative measures

Abstract

The ability of digital storytelling agents to evaluate their own output is important for ensuring high-quality human-agent interactions. However, evaluating stories remains an open problem. Past evaluative techniques are either model-specific, measuring features of the model rather than of the generated stories, or require direct human feedback, which is resource-intensive. We introduce a number of story features that correlate with human judgments of stories and present algorithms that can measure these features. We find that this approach can serve as a proxy for human-subject studies for researchers evaluating story generation systems.
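The paper's specific features and algorithms are not reproduced on this page, so the following is a minimal, hypothetical sketch of the kind of quantitative story feature the abstract describes: a distinct-n lexical-diversity score, under which highly repetitive generated text scores low. The function name, threshold-free scoring, and example story are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only (not from the paper): one simple quantitative
# feature that could be computed automatically over generated stories.
# distinct-n = fraction of unique n-grams among all n-grams; values near
# 0 indicate repetitive text, values near 1 indicate diverse text.

def distinct_n(story: str, n: int = 2) -> float:
    """Return the fraction of unique n-grams in the story's token stream."""
    tokens = story.lower().split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:  # story shorter than n tokens
        return 0.0
    return len(set(ngrams)) / len(ngrams)

if __name__ == "__main__":
    generated = "the knight rode to the castle and the knight rode to the gate"
    print(f"distinct-2: {distinct_n(generated, 2):.3f}")  # ~0.667, fairly repetitive
```

In a pipeline of this kind, several such feature scores would be computed per story and correlated against human quality ratings; the features that correlate well then stand in for the human judges.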

Citation (APA)

Purdy, C., Wang, X., He, L., & Riedl, M. (2018). Predicting generated story quality with quantitative measures. In Proceedings of the 14th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2018 (pp. 95–101). AAAI Press. https://doi.org/10.1609/aiide.v14i1.13021
