Hand location classification from 3D signing virtual avatars using neural networks

Abstract

3D sign language data is being actively generated and exchanged. Sign language recognition from 3D data is therefore a promising research direction, aiming at a new understanding and efficient indexing of this type of content. Model-based recognition strategies commonly recognize sign language features separately: the handshape, the hand position, the orientation, and the movement. In this paper, we propose a novel approach to classifying the position of the hand in space. The approach is based on a two-layer feed-forward network and produces classifications that are very close to human perception. Evaluations were carried out by 10 PhD students and 2 sign language experts. The results show the superiority of our approach over classic methods based on computing the distance between the hand and the face, as well as over the k-nearest-neighbors method: our method achieved the lowest average misclassification rate, at 4.58%. © 2014 Springer International Publishing.
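The abstract names the technique (a two-layer feed-forward network, i.e. one hidden layer plus an output layer) but not its inputs, layer width, or class inventory. The following is only a minimal sketch of such a classifier using scikit-learn; the feature encoding (3D hand coordinates relative to a body landmark), the location classes, the hidden-layer size, and the random placeholder data are all illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch of a two-layer feed-forward hand-location classifier.
# Features, labels, and layer sizes below are assumptions for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Assumed input: (x, y, z) hand coordinates expressed relative to a body
# reference point (e.g., the avatar's head), so classes are avatar-invariant.
# Assumed output: discrete location classes such as head / chest / waist.
N_CLASSES = 3
X = rng.normal(size=(300, 3))             # placeholder 3D hand positions
y = rng.integers(0, N_CLASSES, size=300)  # placeholder location labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer + one output layer = a "two-layer" feed-forward network
# in the classic sense used by the abstract.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# Misclassification rate, the metric reported in the abstract (4.58% there).
error = 1.0 - clf.score(X_test, y_test)
print(f"misclassification rate: {error:.2%}")
```

A real evaluation would replace the placeholder arrays with labeled hand positions extracted from the 3D signing avatars and compare the error rate against the distance-based and k-nearest-neighbors baselines mentioned in the abstract.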

Citation (APA)

Jaballah, K., & Jemni, M. (2014). Hand location classification from 3D signing virtual avatars using neural networks. In Lecture Notes in Computer Science (Vol. 8548, pp. 439–445). Springer. https://doi.org/10.1007/978-3-319-08599-9_66
