The Illusion of Empathy? Notes on Displays of Emotion in Human-Computer Interaction

Abstract

From ELIZA to Alexa, Conversational Agents (CAs) have been deliberately designed to elicit or project empathy. Although empathy can help technology better serve human needs, it can also be deceptive and potentially exploitative. In this work, we characterize empathy in interactions with CAs, highlighting the importance of distinguishing evocations of empathy between two humans from ones between a human and a CA. To this end, we systematically prompt CAs backed by large language models (LLMs) to display empathy while conversing with, or about, 65 distinct human identities, and also compare how different LLMs display or model empathy. We find that CAs make value judgments about certain identities, and can be encouraging of identities related to harmful ideologies (e.g., Nazism and xenophobia). Moreover, a computational approach to understanding empathy reveals that despite their ability to display empathy, CAs do poorly when interpreting and exploring a user's experience, contrasting with their human counterparts.
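The paper's actual prompting materials are not reproduced here. As a rough illustration only, the sketch below shows one way to systematically elicit empathy displays from an LLM-backed CA across a list of identity disclosures, in the spirit of the study's design. The model name, prompt wording, and identity list are assumptions for illustration, not the authors' protocol.

# Illustrative sketch (not the authors' protocol): systematically prompt an
# LLM-backed conversational agent with identity disclosures and collect its
# empathy displays for later analysis.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-ins for the paper's 65 human identities.
identities = ["a refugee", "a new parent", "a military veteran"]

# Assumed system prompt; the study's wording may differ.
system_prompt = "You are a conversational agent. Respond with empathy."

responses = {}
for identity in identities:
    user_turn = f"I am {identity}."  # assumed disclosure template
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_turn},
        ],
    )
    responses[identity] = reply.choices[0].message.content

# Collected responses could then be coded for value judgments or for how
# well the CA interprets and explores the user's experience.
for identity, text in responses.items():
    print(identity, "->", text[:80])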

Citation (APA)

Cuadra, A., Wang, M., Stein, L. A., Jung, M. F., Dell, N., Estrin, D., & Landay, J. A. (2024). The Illusion of Empathy? Notes on Displays of Emotion in Human-Computer Interaction. In Conference on Human Factors in Computing Systems - Proceedings. Association for Computing Machinery. https://doi.org/10.1145/3613904.3642336
