Of Like Mind: The (Mostly) Similar Mentalizing of Robots and Humans

28 citations · 25 Mendeley readers
Abstract

Mentalizing is the process of inferring others’ mental states; it contributes to an inferential system known as Theory of Mind (ToM)—a system that is critical to human interactions because it facilitates sense-making and the prediction of future behaviors. As technological agents like social robots increasingly exhibit hallmarks of intellectual and social agency—and are increasingly integrated into contemporary social life—it is not yet fully understood whether humans hold ToM for such agents. To build on extant research in this domain, five canonical tests that signal implicit mentalizing (white-lie detection, intention inferring, facial affect interpretation, vocal affect interpretation, and false-belief detection) were conducted for an agent (an anthropomorphic robot, a machinic robot, or a human) through video-presented (Study 1) and physically copresent (Study 2) interactions. Findings suggest that mentalizing tendencies for robots and humans are more alike than different; however, the use of nonliteral language, copresent interactivity, and reliance on agent-class heuristics may reduce tendencies to mentalize robots.

Cite

CITATION STYLE

APA

Banks, J. (2021). Of Like Mind: The (Mostly) Similar Mentalizing of Robots and Humans. Technology, Mind, and Behavior, 1(2). https://doi.org/10.1037/tmb0000025