"Using Justifications to Mitigate Loss in Human Trust when Robots Perform Norm - Violating and Deceptive Behaviors


Abstract

Robots are increasingly being introduced in environments that require intimate and sensitive interactions with humans, ranging from robot caretakers in senior living facilities to medical assistants in hospitals and team members embedded in military operations. For robots to be trusted and accepted in interactions with humans, they must be aware of, follow, and prioritize the norms of the communities in which they operate. This line of research examines human perceptions of justifications presented by robots that exhibit norm-violating and deceptive behaviors. Across two studies, we examined human trust and moral blame ratings of robots violating social norms and the effects of justifications in mitigating initial perceptions. We aim to expand our research to emphasize deceptive robotic acts, providing quantifiable evidence for the "deception objection" debate in the social robotics literature.

Cite

APA

Rosero, A. (2023). Using justifications to mitigate loss in human trust when robots perform norm-violating and deceptive behaviors. In ACM/IEEE International Conference on Human-Robot Interaction (pp. 766–768). IEEE Computer Society. https://doi.org/10.1145/3568294.3579979
