Development of Al-Quran sign language classification based on convolutional neural network

Abstract

Sign language is the main form of communication used by deaf people; most of their daily activities, such as speaking, reading, and learning, involve sign language. To read the ayah of the Al-Quran, deaf people rely on Arabic Sign Language (ArSL). Assistive technologies that support the learning and teaching of the Al-Quran are therefore very important for them, because the traditional method is difficult and challenging: teachers must first master ArSL before they can teach the Al-Quran. Such assistive technology is still considered relatively new and not well developed. In Malaysia and Indonesia, most existing solutions are mobile or web-based applications, both of which require a continuous internet connection and are suitable only for personal use. Previous research on assistive technologies can be classified into two types of devices, sensor-based and image-based, each with its own advantages and disadvantages. This project focuses only on the image-based approach, with the scope limited to supervised machine learning using a convolutional neural network (CNN) developed to achieve above 80% accuracy in both training and testing. The behaviour of the CNN model can be assessed from the pattern of the training and testing results, which indicates whether the model is overfitting, underfitting, or optimal. This project shows that, with appropriate tuning of the hyperparameters based on the resulting pattern, the accuracy of the model can be improved. The CNN model is developed from scratch through trial-and-error tuning, since there is no formal technique for selecting hyperparameters. Finally, the CNN model is converted into the TensorFlow Lite format, ready to be integrated with mobile applications.
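
The paper itself does not list its architecture or code here, but the workflow it describes (train a CNN image classifier, diagnose over/underfitting from training versus testing accuracy, then export to TensorFlow Lite) can be illustrated with a minimal Keras sketch. The image size, layer sizes, dataset names, and the number of sign classes below are illustrative assumptions, not the authors' actual settings.

```python
import tensorflow as tf

NUM_CLASSES = 28        # assumption: one class per Arabic letter sign
IMG_SIZE = (64, 64)     # assumption: input image resolution

# A small CNN image classifier, built from scratch as in the project scope.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(*IMG_SIZE, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),   # one of the hyperparameters tuned by trial and error
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# history = model.fit(train_ds, validation_data=test_ds, epochs=30)
# Comparing history.history["accuracy"] with history.history["val_accuracy"]
# shows whether the run is overfitting (training far above testing),
# underfitting (both low), or near optimum, which guides hyperparameter tuning.

# Convert the trained model to TensorFlow Lite for mobile integration.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("arsl_cnn.tflite", "wb") as f:
    f.write(tflite_model)
```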

CITATION STYLE

APA

Nizam, M. Z. M., Saad, S. M., Suhaimi, M. A., Dzahir, M. A. M., Rahim, S. Z. A., & Dzahir, M. A. M. (2021). Development of Al-Quran sign language classification based on convolutional neural network. In AIP Conference Proceedings (Vol. 2347). American Institute of Physics Inc. https://doi.org/10.1063/5.0051490
