Sounding Robots: Design and Evaluation of Auditory Displays for Unintentional Human-robot Interaction

Abstract

Non-verbal communication is important in HRI, particularly when humans and robots do not actively engage in a task together but rather co-exist in a shared space. Robots may still need to communicate states such as urgency or availability, as well as where they intend to go, in order to avoid collisions and disruptions. Sound can convey such states and intentions in an intuitive and non-disruptive way. Here, we propose a multi-layer classification system for displaying multiple pieces of robot information simultaneously via sound. We first conceptualise which robot features could be displayed (robot size, speed, availability for interaction, urgency, and directionality); we then map them to a set of audio parameters. The designed sounds were evaluated in five online studies, in which people listened to the sounds and were asked to identify the associated robot features. Participants generally understood the sounds as intended, especially when evaluating one feature at a time, and partially when evaluating two features simultaneously. These results suggest that sound can successfully communicate robot states and intended actions implicitly and intuitively.

Citation (APA)

Orthmann, B., Leite, I., Bresin, R., & Torre, I. (2023). Sounding Robots: Design and Evaluation of Auditory Displays for Unintentional Human-robot Interaction. ACM Transactions on Human-Robot Interaction, 12(4). https://doi.org/10.1145/3611655
