Virtual hands in VR: Motion capture, synthesis, and perception


Abstract

We use our hands every day: to grasp a cup of coffee, write text on a keyboard, or signal that we are about to say something important. We use our hands to interact with our environment and to help us communicate with each other without thinking about it. Wouldn't it be great to be able to do the same in virtual reality? However, accurate hand motions are not trivial to capture. In this course, we present the current state of the art in virtual hands. Starting with current examples for controlling and depicting hands in virtual reality (VR), we dive into the latest methods and technologies for capturing hand motions. As hands currently cannot be captured in every situation, and as constraints stopping us from intersecting with objects are typically not available in VR, we present research on how to synthesize hand motions and simulate grasping motions. Finally, we provide an overview of our knowledge of how virtual hands are perceived, resulting in practical tips on how to represent and handle virtual hands. Our goals are (a) to present a broad state of the art of the current usage of hands in VR, (b) to provide more in-depth knowledge about the functioning of current hand motion tracking and hand motion synthesis methods, (c) to give insights into our perception of hand motions in VR and how to use those insights when developing new applications, and finally (d) to identify gaps in knowledge that might be investigated next. While the focus of this course is on VR, many parts also apply to augmented reality, mixed reality, and character animation in general, and some content originates from these areas.


Citation (APA)
Jörg, S., Ye, Y., Neff, M., Mueller, F., & Zordan, V. (2020). Virtual hands in VR: Motion capture, synthesis, and perception. In ACM SIGGRAPH 2020 Courses, SIGGRAPH 2020. Association for Computing Machinery. https://doi.org/10.1145/3388769.3407494
