Effects of anthropomorphism and accountability on trust in human robot interaction

218 citations · 249 Mendeley readers

Abstract

This paper examines how people's trust in and dependence on robot teammates providing decision support vary as a function of different attributes of the robot, such as perceived anthropomorphism, the type of support the robot provides, and its physical presence. We conduct a mixed-design user study with multiple robots to investigate trust, inappropriate reliance, and compliance measures in the context of a time-constrained game. We also examine how human accountability addresses errors due to over-compliance in the context of human-robot interaction (HRI). This study is novel in that it examines multiple attributes at once, enabling multi-way comparisons of how different attributes affect trust in and compliance with the agent. Results from the 4×4×2×2 study show that the agent's behavior and anthropomorphism are the most significant factors in predicting trust in and compliance with the robot. Furthermore, adding a coalition-building preface, in which the agent explains why it might make errors while giving advice, increases trust for specific agent behaviors.
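To make the factorial design concrete, the following is a minimal, purely illustrative sketch (not taken from the paper) of how trust ratings might be modeled against four categorical factors of the kind described in the abstract. All column names, factor levels, and data below are hypothetical placeholders.

# Illustrative only: modeling trust as a function of four between/within factors.
# Factor names, levels, and data are hypothetical, not drawn from the study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # hypothetical number of observations

df = pd.DataFrame({
    "anthropomorphism": rng.choice(["low", "mid_low", "mid_high", "high"], n),
    "behavior": rng.choice(["b1", "b2", "b3", "b4"], n),       # hypothetical support behaviors
    "embodiment": rng.choice(["copresent", "remote"], n),
    "preface": rng.choice(["none", "coalition"], n),
    "trust": rng.normal(4.0, 1.0, n),                          # hypothetical rating-scale scores
})

# Main-effects model: which factors predict reported trust?
model = smf.ols(
    "trust ~ C(anthropomorphism) + C(behavior) + C(embodiment) + C(preface)",
    data=df,
).fit()
print(model.summary())

A full analysis of a mixed design would additionally account for repeated measures per participant (e.g., with a mixed-effects model); the ordinary least squares fit above is only meant to show how the factor structure maps onto a statistical model.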

Cite (APA)

Natarajan, M., & Gombolay, M. (2020). Effects of anthropomorphism and accountability on trust in human robot interaction. In ACM/IEEE International Conference on Human-Robot Interaction (pp. 33–42). IEEE Computer Society. https://doi.org/10.1145/3319502.3374839
