Abstract
Teaching robotic systems how to carry out a task in a collaborative environment remains a challenge, because replicating natural human-to-human interaction requires interaction modalities capable of conveying complex information. Speech, gestures, gaze-based interaction, and directly guiding a robotic system are among the modalities with the potential to enable smooth multimodal human-robot interaction. This paper presents a conceptual approach for multimodally teaching a robotic system how to pick-and-place an object, one of the fundamental tasks not only in robotics but in everyday life. By establishing the task model and the dialogue model separately, we aim to decouple robot/task logic from interaction logic and to achieve modality independence for the teaching interaction. Finally, we elaborate on an experimental implementation of our models for multimodally teaching a UR-10 robot arm how to pick-and-place an object.
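The separation the abstract describes, task logic split from interaction logic, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: every name here (TaskModel, DialogueModel, Event, the intent vocabulary) is hypothetical, assuming only that each modality's recognizer emits the same abstract intents, so the pick-and-place state machine never sees which modality produced an input.

```python
# Hypothetical sketch of a modality-independent teach-in loop.
# Names and intent vocabulary are assumptions, not from the paper.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Step(Enum):
    """Stages of a pick-and-place teaching episode."""
    SELECT_OBJECT = auto()
    PICK = auto()
    SELECT_TARGET = auto()
    PLACE = auto()
    DONE = auto()


@dataclass
class Event:
    """A modality-independent interaction event.

    `modality` records where the input came from (speech, gesture,
    gaze, kinesthetic guidance); the task logic never inspects it.
    """
    intent: str                      # e.g. "confirm", "abort"
    modality: str                    # e.g. "speech", "gesture"
    payload: Optional[dict] = None   # e.g. a pointed-at object ID


class TaskModel:
    """Robot/task logic: advances the pick-and-place state machine.

    Knows nothing about input modalities; it consumes abstract intents only.
    """
    ORDER = [Step.SELECT_OBJECT, Step.PICK,
             Step.SELECT_TARGET, Step.PLACE, Step.DONE]

    def __init__(self):
        self.step = Step.SELECT_OBJECT

    def advance(self, intent: str) -> Step:
        if intent == "abort":
            self.step = Step.SELECT_OBJECT  # restart the teaching episode
        elif intent == "confirm" and self.step is not Step.DONE:
            self.step = self.ORDER[self.ORDER.index(self.step) + 1]
        return self.step


class DialogueModel:
    """Interaction logic: maps raw, modality-specific input onto intents."""

    # Each modality has its own recognizer; all emit the same abstract
    # intents, which is what keeps the task model modality-independent.
    CONFIRMATIONS = {
        "speech": {"yes", "okay", "pick it up"},
        "gesture": {"thumbs_up", "nod"},
        "gaze": {"dwell_on_object"},
        "guidance": {"hand_released"},
    }

    def interpret(self, modality: str, raw: str) -> Event:
        confirmed = raw in self.CONFIRMATIONS.get(modality, set())
        return Event(intent="confirm" if confirmed else "unknown",
                     modality=modality)


if __name__ == "__main__":
    task, dialogue = TaskModel(), DialogueModel()
    # The task progresses identically no matter which modality spoke.
    for modality, raw in [("speech", "okay"), ("gesture", "thumbs_up"),
                          ("gaze", "dwell_on_object"),
                          ("guidance", "hand_released")]:
        event = dialogue.interpret(modality, raw)
        print(modality, "->", task.advance(event.intent))
```

Under this split, adding a new modality (say, a tablet UI) means adding one recognizer to the dialogue side; the pick-and-place state machine stays untouched, which is the modality independence the abstract claims.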
Citation
Kleer, N., Rekrut, M., Wolter, J., Schwartz, T., & Feld, M. (2023). A Multimodal Teach-in Approach to the Pick-and-Place Problem in Human-Robot Collaboration. In ACM/IEEE International Conference on Human-Robot Interaction (pp. 81–85). IEEE Computer Society. https://doi.org/10.1145/3568294.3580047