Sign language translation through deep learning is a popular research topic. It opens doors of communication for deaf and mute people by translating sign language gestures. A system that translates input sign language gestures into text is called a sign language translation system (SLTS). In this paper, an optimised machine-learning-based SLTS for Indian Sign Language (ISL) is proposed to assist deaf and mute persons. The paper also presents a simulation analysis of the impact of the number of convolution layers, stride size, number of epochs, and choice of activation function on the accuracy of ISL gesture translation. An optimised ISL translation system (ISLTS) for fingerspelled alphanumeric data of 36 classes, using a convolutional neural network (CNN) with a novel RADAM_NORM optimiser, is proposed. The system is evaluated on two datasets: a customised ISL alphanumeric dataset taken from Kaggle, and a dataset prepared by the authors consisting of 36 classes and nearly 50K images. The proposed ISLTS achieves an accuracy of 99.446% on the first dataset and 97.889% on the second.
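The abstract notes that stride size is one of the CNN hyperparameters studied. As background only (this is not the authors' implementation, and the kernel and image sizes below are illustrative assumptions), a minimal NumPy sketch of a strided valid convolution shows how a larger stride shrinks the resulting feature map:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    # Valid (no padding) 2D convolution with a configurable stride.
    # Output size along each axis: (input - kernel) // stride + 1.
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# Illustrative 128x128 grayscale input and 3x3 averaging kernel.
img = np.random.rand(128, 128)
k = np.ones((3, 3)) / 9.0
print(conv2d(img, k, stride=1).shape)  # (126, 126)
print(conv2d(img, k, stride=2).shape)  # (63, 63)
```

Doubling the stride roughly halves each spatial dimension of the feature map, which reduces computation but can discard fine detail — the trade-off the paper's simulation analysis quantifies for ISL gestures.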
CITATION STYLE
Sabharwal, S., & Singla, P. (2023). Optimised Machine Learning-based Translation of Indian Sign Language to Text. International Journal of Intelligent Engineering and Systems, 16(4), 398–408. https://doi.org/10.22266/ijies2023.0831.32