Abstract
Hand gestures and voice inputs have served as vital communication channels for decades. In this work, a deep learning-based Reversible Convolutional Neural Network (Rev-CNN) is modelled to recognize gesture-based sign language; the same reversible model is also applied to map voice input to sign language. The reversible representation attains superior accuracy with fewer model parameters than comparable CNN architectures. The efficiency of the reversible model is evaluated against the existing G-CNN and VGG-11/16 models in both training and testing. Two datasets, the ROBITA Indian Sign Language Gesture Database and a standard voice-input dataset, are used for evaluation. The proposed reversible CNN attains the highest prediction accuracies of 94.38% and 97.89%, outperforming the other approaches, namely G-CNN, VGG-11, and VGG-16. Experimental metrics including the loss function, error rate, and execution time are measured and compared against these methods, and additional efficiency metrics are used to assess the proposed model. The model outperforms the existing approaches by categorizing gestures with a reduced error rate: the reversible CNN reaches a prediction accuracy of 95.38% on dataset 1 and 96.69% on dataset 2, with an execution time of 5.5 minutes.
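The abstract does not detail the Rev-CNN's internal structure, but reversible networks commonly use RevNet-style additive coupling, in which each block's input can be reconstructed exactly from its output, so intermediate activations can be recomputed rather than stored. The sketch below is a minimal, illustrative PyTorch implementation of one such block under that assumption; the class name, layer choices, and shapes are illustrative, not the authors' architecture.

import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    # RevNet-style additive coupling: split channels into (x1, x2),
    # then y1 = x1 + F(x2) and y2 = x2 + G(y1). The mapping is
    # exactly invertible, which is the usual source of the memory
    # savings claimed for reversible models. (Assumed design, not
    # the paper's specified Rev-CNN.)
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        # F and G are small conv sub-networks (illustrative choice only).
        self.f = nn.Sequential(nn.Conv2d(half, half, 3, padding=1), nn.ReLU())
        self.g = nn.Sequential(nn.Conv2d(half, half, 3, padding=1), nn.ReLU())

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return torch.cat([y1, y2], dim=1)

    def inverse(self, y):
        # Exact reconstruction of the block input from its output.
        y1, y2 = torch.chunk(y, 2, dim=1)
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return torch.cat([x1, x2], dim=1)

# Round-trip check: the inverse recovers the input up to float rounding.
block = ReversibleBlock(channels=16).eval()
x = torch.randn(1, 16, 32, 32)
with torch.no_grad():
    y = block(x)
    assert torch.allclose(block.inverse(y), x, atol=1e-5)

Because the inverse is exact, a stack of such blocks can discard activations during the forward pass and recompute them during backpropagation, which is consistent with the abstract's claim of high accuracy at a reduced parameter and memory budget.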
Citation
Govindan, A. P., & Kumarappan, A. (2022). A Reversible Convolutional Neural Network Model for Sign Language Recognition. International Journal of Intelligent Engineering and Systems, 15(2), 163–174. https://doi.org/10.22266/ijies2022.0430.16