Study on user-generated 3D gestures for video conferencing system with see-through head mounted display

Abstract

As video conferencing systems transition to head-mounted displays (HMDs), non-contact (3D) hand gestures are likely to replace conventional input devices by providing more efficient interaction at lower cost. This paper presents the design of an experimental video conferencing system built around an optical see-through HMD, a Leap Motion hand tracker, and RGB cameras. Both skeleton-based dynamic hand gesture recognition and ergonomics-based gesture lexicon design were studied. The proposed recognition algorithm fused hand shape and hand direction features, used a temporal pyramid to obtain a high-dimensional feature representation, and predicted the gesture class with a linear SVM. Subjects (N = 16) self-generated hand gestures for 25 tasks related to video conferencing and object manipulation, and rated each gesture on ease of performance, match to the command, and arm fatigue. Based on these outcomes, a gesture lexicon is proposed for controlling a video conferencing system and for manipulating virtual objects.
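
To make the recognition pipeline concrete, the Python sketch below shows one plausible reading of it: per-frame hand-shape and hand-direction features from the tracked skeleton are fused, mean-pooled over a temporal pyramid into a fixed-length high-dimensional vector, and classified with a linear SVM. The function names, the scikit-learn dependency, and the exact feature definitions are assumptions made for illustration, not the authors' implementation.

import numpy as np
from sklearn.svm import LinearSVC

def fuse_frame_features(joint_positions, palm_direction):
    """Fuse hand-shape and hand-direction features for one frame.

    joint_positions: (J, 3) tracked joint coordinates (e.g., from Leap Motion).
    palm_direction: (3,) palm pointing/normal vector.
    The specific feature choices here are illustrative assumptions.
    """
    # Hand shape: pairwise joint distances, invariant to hand position.
    diffs = joint_positions[:, None, :] - joint_positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    shape = dists[np.triu_indices(joint_positions.shape[0], k=1)]
    # Hand direction: unit-normalized palm vector.
    direction = palm_direction / (np.linalg.norm(palm_direction) + 1e-8)
    return np.concatenate([shape, direction])

def temporal_pyramid(frames, levels=3):
    """Mean-pool per-frame features over a temporal pyramid.

    frames: (T, D) fused per-frame features; requires T >= 2**(levels - 1).
    Returns a fixed-length vector of size D * (2**levels - 1).
    """
    pooled = []
    for level in range(levels):
        # Level 0 pools the whole sequence; finer levels pool halves, quarters, ...
        for segment in np.array_split(frames, 2 ** level):
            pooled.append(segment.mean(axis=0))
    return np.concatenate(pooled)

def train_classifier(sequences, labels):
    """Train a linear SVM on pyramid-pooled gesture sequences.

    sequences: list of gestures, each a list of (joint_positions, palm_direction).
    labels: gesture class for each sequence.
    """
    X = np.stack([
        temporal_pyramid(np.stack([fuse_frame_features(j, d) for j, d in seq]))
        for seq in sequences
    ])
    clf = LinearSVC(C=1.0)
    clf.fit(X, labels)
    return clf

At test time a new sequence would be pooled the same way and passed to clf.predict; the pyramid gives the classifier both the overall gesture shape (level 0) and coarse temporal ordering (finer levels).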

Citation (APA)

Li, G., Liu, Y., Wang, Y., & Rempel, D. (2018). Study on user-generated 3D gestures for video conferencing system with see-through head mounted display. In Communications in Computer and Information Science (Vol. 875, pp. 605–615). Springer. https://doi.org/10.1007/978-981-13-1702-6_60
