Hand Gesture Recognition for Sign Languages Using 3DCNN for Efficient Detection

Citations: 1 · Mendeley readers: 5

Abstract

Sign language recognition aims to provide an efficient and accurate mechanism for recognizing hand gestures made in sign languages and converting them into text and speech. Sign language is a means of communication using bodily movements, especially of the hands and arms. With sign language recognition methods, dialog between the deaf and the hearing can become a reality. In this project, we carry out sign language recognition by building 3D convolutional neural network (3DCNN) models that perform multi-class prediction on input videos containing hand gestures. On detection of the input gesture, both text and speech are generated and presented as output to the user. In addition, we implement real-time video recognition and continuous sign language recognition for multi-word videos. We present a method for recognizing words in three languages — Tamil Sign Language (TSL), Indian Sign Language (ISL), and American Sign Language (ASL) — and outperform state-of-the-art alternatives, with accuracies of 97.5%, 99.75%, and 98%, respectively.
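The abstract does not include code, but the core idea behind a 3DCNN is that the convolution kernel slides along the temporal axis as well as the two spatial axes, so the network can capture motion between frames (which per-frame 2D convolutions cannot). The sketch below is a naive, illustrative 3D convolution in NumPy; the shapes and kernel are assumptions for demonstration, not the architecture used in the paper.

```python
import numpy as np

def conv3d(video, kernel):
    """Naive 'valid' 3D convolution over a (frames, height, width) volume.

    Sliding the kernel along time as well as space is what lets a 3DCNN
    learn spatiotemporal features of a gesture, e.g. how the hand moves
    across consecutive frames of a sign.
    """
    T, H, W = video.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(video[i:i + t, j:j + h, k:k + w] * kernel)
    return out

# Toy "gesture clip": 16 grayscale frames of 32x32 pixels (assumed sizes).
clip = np.random.rand(16, 32, 32)
features = conv3d(clip, np.ones((3, 3, 3)) / 27.0)
print(features.shape)  # (14, 30, 30): one feature map shrunk by kernel-1 per axis
```

In a full model, many such learned kernels would be stacked with pooling and a final softmax over the word classes; a framework such as PyTorch (`torch.nn.Conv3d`) or Keras (`Conv3D`) would replace this hand-written loop.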

Citation (APA)

Elangovan, T., Arockia Xavier Annie, R., Sundaresan, K., & Pradhakshya, J. D. (2023). Hand Gesture Recognition for Sign Languages Using 3DCNN for Efficient Detection. In Lecture Notes in Computational Vision and Biomechanics (Vol. 38, pp. 215–233). Springer Science and Business Media B.V. https://doi.org/10.1007/978-3-031-10015-4_19
