Hand shape and 3D pose estimation using depth data from a single cluttered frame

Abstract

This paper describes a method that, given an input image of a person signing a gesture in a cluttered scene, locates the gesturing arm, automatically detects and segments the hand, and finally creates a ranked list of possible shape classes, 3D pose orientations, and full hand configuration parameters. The clutter-tolerant hand segmentation algorithm is based on depth data from a single image captured with a commercially available depth sensor, namely the Kinect™. Shape and 3D pose estimation is formulated as an image database retrieval problem: given a segmented hand, the best matches are extracted from a large database of synthetically generated hand images. Contrary to previous approaches, this clutter-tolerant method is simultaneously user-independent, able to automatically detect and segment the hand from a single image (no multi-view or motion cues are employed), and capable of estimating not only the 3D pose orientation but also the full hand articulation parameters. The performance of this approach is quantitatively and qualitatively evaluated on a dataset of real images of American Sign Language (ASL) handshapes. © 2012 Springer-Verlag.
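The abstract outlines a two-stage pipeline: depth-based hand segmentation followed by ranked retrieval against a synthetic hand-image database. The sketch below illustrates one plausible realization of those two stages, not the authors' actual implementation: the segmentation heuristic (hand as the nearest connected depth blob), the depth band, and the chamfer-distance similarity measure are all illustrative assumptions, since the abstract does not specify them.

    """
    Minimal sketch of the pipeline the abstract describes:
    (1) clutter-tolerant hand segmentation from a single Kinect depth frame,
    (2) shape/pose estimation by retrieval from a synthetic hand database.
    All thresholds and the similarity measure are illustrative assumptions.
    """

    import numpy as np
    from scipy import ndimage


    def segment_hand(depth: np.ndarray, band_mm: float = 120.0) -> np.ndarray:
        """Segment the hand as the connected blob nearest to the sensor.

        Assumes the gesturing hand is the closest object in the scene, a
        common heuristic for single-frame depth segmentation (the paper's
        actual algorithm first locates the gesturing arm).
        """
        valid = depth > 0                       # 0 marks missing Kinect depth
        nearest = depth[valid].min()
        # Keep pixels within a depth band behind the nearest surface point.
        mask = valid & (depth <= nearest + band_mm)
        # Retain only the largest connected component as the hand region.
        labels, n = ndimage.label(mask)
        if n == 0:
            return np.zeros_like(mask)
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        return labels == (int(np.argmax(sizes)) + 1)


    def chamfer_distance(query_edges: np.ndarray, db_edges: np.ndarray) -> float:
        """Directed chamfer distance between two binary edge maps."""
        dist_to_db = ndimage.distance_transform_edt(~db_edges)
        return float(dist_to_db[query_edges].mean())


    def rank_database(query_mask: np.ndarray, database: list[dict]) -> list[dict]:
        """Return database entries ranked by similarity to the segmented hand.

        Each entry is assumed to hold a precomputed binary edge map plus the
        shape class, 3D orientation, and full articulation parameters of the
        synthetic hand that generated it; the sorted list corresponds to the
        ranked list of hypotheses mentioned in the abstract.
        """
        query_edges = query_mask ^ ndimage.binary_erosion(query_mask)
        scored = [
            (chamfer_distance(query_edges, entry["edges"]), entry)
            for entry in database
        ]
        scored.sort(key=lambda pair: pair[0])
        return [entry for _, entry in scored]

In this kind of retrieval formulation, the database typically densely samples shape class, viewpoint, and articulation, so the top-ranked matches directly yield the candidate parameter sets.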

Citation (APA)

Doliotis, P., Athitsos, V., Kosmopoulos, D., & Perantonis, S. (2012). Hand shape and 3D pose estimation using depth data from a single cluttered frame. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7431 LNCS, pp. 148–158). https://doi.org/10.1007/978-3-642-33179-4_15
