A vision-based deep learning approach for independent-users Arabic sign language interpretation

Abstract

According to the World Health Organization (WHO), more than 5% of the world's population is deaf and faces severe difficulties communicating with hearing people. Without an interpreter for their signs, expressing themselves is a real challenge. Many recent studies address Sign Language Recognition (SLR), which aims to narrow this gap between deaf and hearing people by replacing the need for a human interpreter. However, sign recognition systems face many challenges, such as low accuracy, complicated gestures, high noise levels, and the need to operate under varying conditions and to generalize rather than be locked to specific limitations. Researchers have therefore proposed different solutions to these problems. Moreover, each language has its own signs, and covering the signs of every language is very challenging. The current study's objectives are: (i) to present a dataset of 20 Arabic words, and (ii) to propose a deep learning (DL) architecture that combines a convolutional neural network (CNN) with a recurrent neural network (RNN). The suggested architecture achieved 98% accuracy on the presented dataset, and 93.4% top-1 and 98.8% top-5 accuracy on the UCF-101 dataset.
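As a rough illustration of the CNN-RNN combination described above, the following PyTorch sketch encodes each video frame with a small CNN and aggregates the per-frame features with an LSTM before classifying the clip into one of 20 word classes. The layer sizes, feature dimensions, and framework choice are assumptions for illustration only; the paper's exact architecture is not reproduced here.

import torch
import torch.nn as nn

class CNNRNNClassifier(nn.Module):
    # Hypothetical CNN+RNN video classifier: a small CNN encodes each
    # frame, then an LSTM aggregates the per-frame features over time.
    def __init__(self, num_classes=20, feat_dim=128, hidden_dim=256):
        super().__init__()
        # Per-frame CNN encoder (illustrative; not the paper's exact layers).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (batch*time, 64, 1, 1)
            nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Temporal model over the sequence of frame features.
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, clips):
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.rnn(feats)   # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1])       # class logits per clip

# Example: a batch of 2 clips, 16 frames each, 112x112 RGB.
model = CNNRNNClassifier(num_classes=20)
logits = model(torch.randn(2, 16, 3, 112, 112))
print(logits.shape)  # torch.Size([2, 20])

Summarizing the sequence with the LSTM's final hidden state is one common design choice for video classification; attention over time steps is another frequent alternative for sign language clips.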

Citation (APA)

Balaha, M. M., El-Kady, S., Balaha, H. M., Salama, M., Emad, E., Hassan, M., & Saafan, M. M. (2023). A vision-based deep learning approach for independent-users Arabic sign language interpretation. Multimedia Tools and Applications, 82(5), 6807–6826. https://doi.org/10.1007/s11042-022-13423-9
