Sign Language Recognition Based on 3D Convolutional Neural Networks

Abstract

The inclusion of people with disabilities remains a recurring problem throughout the world. For the hearing impaired, the fact that sign language is spoken by only a small part of the population creates a barrier that limits their quality of life. Popularizing, or even automating, sign language recognition could raise their quality of life considerably. Recognizing the importance of sign language recognition for the hearing impaired, we propose a 3D CNN architecture for the recognition of 64 gesture classes from Argentinian Sign Language (LSA64). We demonstrate the efficiency of the method compared with traditional approaches based on hand-crafted features, and show that its results outperform most deep learning-based work, reaching 93.9% accuracy.
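For orientation, the sketch below illustrates the general shape of a 3D CNN video classifier of the kind described: stacked 3D convolutions pool over time as well as space before a final 64-way classification layer. The abstract does not give the authors' layer configuration, so the framework (PyTorch), clip shape (16 frames of 64x64 RGB), channel widths, and kernel sizes here are assumptions for illustration only, not the published architecture.

# Minimal sketch of a 3D CNN gesture classifier; all hyperparameters are assumed.
import torch
import torch.nn as nn

class Gesture3DCNN(nn.Module):
    """Maps a short video clip to one of 64 LSA64 gesture classes."""
    def __init__(self, num_classes: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            # 3D convolutions operate over (time, height, width);
            # each pooling step halves all three dimensions.
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels=3, frames, height, width)
        x = self.features(clip)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    model = Gesture3DCNN()
    dummy_clip = torch.randn(2, 3, 16, 64, 64)  # assumed clip shape
    logits = model(dummy_clip)
    print(logits.shape)  # torch.Size([2, 64])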

Cite (APA)

Neto, G. M. R., Junior, G. B., de Almeida, J. D. S., & de Paiva, A. C. (2018). Sign Language Recognition Based on 3D Convolutional Neural Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10882 LNCS, pp. 399–407). Springer Verlag. https://doi.org/10.1007/978-3-319-93000-8_45
