Autonomy Acceptance Model (AAM): The Role of Autonomy and Risk in Security Robot Acceptance


Abstract

The rapid deployment of security robots across society calls for further examination of their acceptance. This study explored human acceptance of security robots by theoretically extending the technology acceptance model to include the effects of autonomy and risk. To accomplish this, an online experiment with 236 participants was conducted. Participants were randomly assigned to watch a video introducing a security robot operating at a low, moderate, or high level of autonomy and posing either a low or high physical risk to humans, yielding a 3 (autonomy) × 2 (risk) between-subjects design. The findings suggest that greater perceived usefulness, perceived ease of use, and trust enhance acceptance, whereas higher robot autonomy tends to decrease it. Additionally, the physical risk associated with security robots moderates the relationship between autonomy and acceptance. Based on these results, this paper offers recommendations for future research on security robots.

Citation (APA)

Ye, X., Jo, W., Ali, A., Bhatti, S. C., Esterwood, C., Kassie, H. A., & Robert, L. P. (2024). Autonomy Acceptance Model (AAM): The Role of Autonomy and Risk in Security Robot Acceptance. In ACM/IEEE International Conference on Human-Robot Interaction (pp. 840–849). IEEE Computer Society. https://doi.org/10.1145/3610977.3635005
