Results and analysis of the ChaLearn gesture challenge 2012


Abstract

The Kinect™ camera has revolutionized the field of computer vision by making low-cost 3D cameras, which record both RGB and depth data using a structured-light infrared sensor, widely available. We recorded and made available a large database of 50,000 hand and arm gestures. With these data, we organized a challenge emphasizing the problem of learning from very few examples. The data are split into subtasks, each using a small vocabulary of 8 to 12 gestures related to a particular application domain: hand signals used by divers, finger codes representing numerals, signals used by referees, marshalling signals used to guide vehicles or aircraft, etc. We limited the problem to a single user per task and to the recognition of short sequences of gestures punctuated by returning the hands to a resting position. This setting is encountered in computer-interface applications, including robotics, education, and gaming. The challenge setting fosters progress in transfer learning by providing, for training, a large number of subtasks that are related to, but different from, the tasks on which the competitors are tested. © 2013 Springer-Verlag.

Citation (APA)

Guyon, I., Athitsos, V., Jangyodsuk, P., Escalante, H. J., & Hamner, B. (2013). Results and analysis of the ChaLearn gesture challenge 2012. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7854 LNCS, pp. 186–204). https://doi.org/10.1007/978-3-642-40303-3_19
