Automatic Verbal Depiction of a Brick Assembly for a Robot Instructing Humans

Abstract

Verbal and nonverbal communication skills are essential for human-robot interaction, in particular when the agents are involved in a shared task. We address the specific situation where the robot is the only agent that knows both the plan and the goal of the task and has to instruct its human partners. The case study is a brick assembly. We describe a multilayered verbal depictor whose semantic, syntactic, and lexical settings have been collected and evaluated via crowdsourcing. One crowdsourced experiment involves a robot-instructed pick-and-place task. We show that implicitly referring to achieved subgoals (stairs, pillars, etc.) increases the performance of human partners.
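To make the layered architecture concrete, here is a minimal sketch of what a semantic/syntactic/lexical pipeline for such instructions might look like. The paper does not publish code, and every name, template, and data structure below is a hypothetical illustration, not the authors' implementation; the only grounded idea is the final point, that an instruction can implicitly reference a completed subgoal (e.g. a pillar).

```python
# Hypothetical three-layer verbal depictor (illustration only, not the
# authors' code): a semantic layer selects what to say about an assembly
# step, a syntactic layer picks a sentence template, and a lexical layer
# fills in the words.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Move:
    brick: str                      # e.g. "red 2x4 brick"
    target: str                     # e.g. "on top of the blue brick"
    completes: Optional[str] = None  # subgoal this move finishes, e.g. "pillar"


def semantic_layer(move: Move) -> dict:
    """Decide which facts about this step to verbalize."""
    content = {"object": move.brick, "location": move.target}
    if move.completes:
        # Implicit subgoal reference: mention the structure being finished.
        content["subgoal"] = move.completes
    return content


def syntactic_layer(content: dict) -> str:
    """Choose a sentence template matching the selected content."""
    if "subgoal" in content:
        return "Place the {object} {location} to finish the {subgoal}."
    return "Place the {object} {location}."


def lexical_layer(template: str, content: dict) -> str:
    """Realize the final instruction by filling the template."""
    return template.format(**content)


def depict(move: Move) -> str:
    content = semantic_layer(move)
    return lexical_layer(syntactic_layer(content), content)


if __name__ == "__main__":
    step = Move(brick="red 2x4 brick",
                target="on top of the blue column",
                completes="pillar")
    print(depict(step))
    # -> Place the red 2x4 brick on top of the blue column to finish the pillar.
```

In this sketch, the subgoal mention is an optional slot in the semantic content that changes which template the syntactic layer selects, which mirrors the paper's finding that instructions implicitly naming an achieved substructure help human partners more than bare pick-and-place commands.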

Cite

APA

Younes, R., Bailly, G., Pellier, D., & Elisei, F. (2022). Automatic Verbal Depiction of a Brick Assembly for a Robot Instructing Humans. In SIGDIAL 2022 - 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference (pp. 159–171). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.sigdial-1.17
