Recent Advances on Deep Learning for Sign Language Recognition


Abstract

Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the deaf and hard-of-hearing and the hearing world. The emergence and continuous development of deep learning techniques have provided strong momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses additional challenges, leading to the exploration of more advanced architectures, such as the Transformer, for continuous sign language recognition (CSLR). Despite significant progress, several challenges remain: expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical deployment. Addressing these challenges will further advance deep learning for sign language recognition and improve communication for the deaf and hard-of-hearing community.
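
To make the modeling pipeline concrete, the sketch below illustrates one pattern common in this line of work: a per-frame CNN feature extractor, a Transformer encoder over the frame sequence, and a CTC head for continuous sign language recognition. It is a minimal PyTorch illustration with hypothetical layer sizes, vocabulary size, and input shapes, not the specific architecture evaluated in any paper covered by the survey.

import torch
import torch.nn as nn

class SignRecognizer(nn.Module):
    # Illustrative CNN + Transformer encoder with a CTC head for CSLR.
    # All hyperparameters here are hypothetical placeholders.
    def __init__(self, vocab_size, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        # Per-frame spatial feature extractor (a small stand-in for, e.g., a ResNet backbone).
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, d_model)
        # Temporal model over the sequence of frame features.
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(enc_layer, n_layers)
        self.classifier = nn.Linear(d_model, vocab_size + 1)  # +1 for the CTC blank label

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.frame_cnn(frames.view(b * t, c, h, w)).flatten(1)  # (b*t, 64)
        feats = self.proj(feats).view(b, t, -1)                         # (b, t, d_model)
        feats = self.temporal(feats)                                    # (b, t, d_model)
        return self.classifier(feats).log_softmax(-1)                   # per-frame gloss log-probs

# Training step: CTC aligns per-frame predictions with the gloss sequence
# without requiring frame-level annotations (dummy data below).
model = SignRecognizer(vocab_size=1000)
video = torch.randn(2, 16, 3, 112, 112)        # two clips of 16 RGB frames
log_probs = model(video).permute(1, 0, 2)      # CTCLoss expects (time, batch, classes)
targets = torch.randint(1, 1001, (2, 5))       # gloss label sequences (labels 1..1000)
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           torch.tensor([16, 16]), torch.tensor([5, 5]))

The CTC objective is one reason such encoder-only designs appear frequently in CSLR: it allows training from sentence-level gloss labels alone, sidestepping the need for frame-level alignment between video and annotation.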

Citation (APA)

Zhang, Y., & Jiang, X. (2024, March 11). Recent Advances on Deep Learning for Sign Language Recognition. CMES - Computer Modeling in Engineering and Sciences. Tech Science Press. https://doi.org/10.32604/cmes.2023.045731
