Human communication is highly multimodal, encompassing speech, gesture, gaze, facial expressions, and body language. Robots serving as human teammates must act on such multimodal communicative inputs, even when the message is unclear from any single modality. In this paper, we explore a method for improving understanding of complex, situated communications by leveraging coordinated natural language, gesture, and context. These three channels have largely been treated separately, but considering them jointly can yield gains in comprehension [1, 12].
Thomason, W., & Knepper, R. A. (2017). Recognizing Unfamiliar Gestures for Human-Robot Interaction Through Zero-Shot Learning. In Springer Proceedings in Advanced Robotics (Vol. 1, pp. 841–852). Springer Science and Business Media B.V. https://doi.org/10.1007/978-3-319-50115-4_73