Assessing believability


Abstract

We discuss what it means for a non-player character (NPC) to be believable or human-like, and how we can accurately assess believability. We argue that participatory observation, where the human assessing believability takes part in the game, is prone to distortion effects. For many games, a fairer (or at least complementary) assessment might be made by an external observer who does not participate in the game, by comparing and ranking the performance of human and non-human agents playing the game. This assessment philosophy was embodied in the Turing Test track of the recent Mario AI Championship, where non-expert bystanders evaluated the human-likeness of several agents and humans playing a version of Super Mario Bros. We analyze the results of this competition. Finally, we discuss the possibilities for forming models of believability and for maximizing believability through adjusting game content rather than NPC control logic.
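The external-observer protocol described above reduces, in its simplest form, to aggregating bystanders' human-vs-bot judgments into a per-agent score and ranking agents by it. The following is a minimal illustrative sketch of that aggregation; the agent names, vote data, and scoring function are invented for illustration and are not taken from the competition itself.

```python
# Hypothetical sketch: aggregating external observers' judgments into a
# human-likeness score per agent. All names and vote counts are invented.

from collections import Counter

def humanlikeness_score(votes):
    """Fraction of observers who judged the agent's play as human."""
    counts = Counter(votes)
    return counts["human"] / len(votes)

# Each observer watches a gameplay clip and labels it "human" or "bot".
judgments = {
    "agent_A": ["human", "bot", "human", "human"],
    "agent_B": ["bot", "bot", "human", "bot"],
}

# Rank agents by how often bystanders mistook them for a human player.
ranking = sorted(judgments,
                 key=lambda a: humanlikeness_score(judgments[a]),
                 reverse=True)
print(ranking)  # agent_A (0.75) ranks above agent_B (0.25)
```

In practice such scores would be compared against the scores that human players receive under the same protocol, so that an agent is judged relative to how believable actual humans appear to the same observers.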

Citation (APA)

Togelius, J., Yannakakis, G. N., Karakovskiy, S., & Shaker, N. (2012). Assessing believability. In Believable Bots: Can Computers Play Like People? (pp. 215–230). Springer-Verlag Berlin Heidelberg. https://doi.org/10.1007/978-3-642-32323-2_9
