Recognizing Unfamiliar Gestures for Human-Robot Interaction Through Zero-Shot Learning

7 citations · 16 Mendeley readers

Abstract

Human communication is highly multimodal, including speech, gesture, gaze, facial expressions, and body language. Robots serving as human teammates must act on such multimodal communicative input, even when the message is not clear from any single modality. In this paper, we explore a method for achieving increased understanding of complex, situated communications by leveraging coordinated natural language, gesture, and context. These three problems (language, gesture, and context understanding) have largely been treated separately, but considering them jointly can yield gains in comprehension [1, 12].
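
As a rough illustration of the zero-shot learning idea named in the title (not the authors' specific method), the sketch below classifies a gesture feature vector by projecting it into a shared semantic space and picking the nearest class-label embedding, so a class with no training examples can still be recognized as long as a semantic vector for its label exists. The class names, dimensions, and random projection matrix are all illustrative assumptions.

```python
# Generic zero-shot classification sketch: project gesture features into
# a semantic space shared with class-label embeddings, then pick the
# nearest label by cosine similarity. Unseen classes need only a label
# embedding, not training examples. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM, SEM_DIM = 32, 8  # assumed gesture-feature / semantic dimensions

# Semantic embeddings for class labels (e.g., derived from word vectors).
# "beckon" stands in for an unseen class: no training data, only a vector.
class_embeddings = {
    "wave":   rng.normal(size=SEM_DIM),
    "point":  rng.normal(size=SEM_DIM),
    "beckon": rng.normal(size=SEM_DIM),  # unseen at training time
}

# Projection from gesture features to the semantic space; in practice this
# would be learned from (feature, label-embedding) pairs of seen classes.
W = rng.normal(size=(SEM_DIM, FEAT_DIM))

def classify_zero_shot(gesture_features: np.ndarray) -> str:
    """Project features into semantic space; return the nearest class label."""
    z = W @ gesture_features
    z /= np.linalg.norm(z)

    def cosine(v: np.ndarray) -> float:
        return float(z @ (v / np.linalg.norm(v)))

    return max(class_embeddings, key=lambda name: cosine(class_embeddings[name]))

print(classify_zero_shot(rng.normal(size=FEAT_DIM)))
```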

Citation (APA)

Thomason, W., & Knepper, R. A. (2017). Recognizing Unfamiliar Gestures for Human-Robot Interaction Through Zero-Shot Learning. In Springer Proceedings in Advanced Robotics (Vol. 1, pp. 841–852). Springer Science and Business Media B.V. https://doi.org/10.1007/978-3-319-50115-4_73
