Abstract
The deaf-mute community often feels helpless when others cannot understand them, and vice versa. To help bridge this gap, this study applies a CNN-based architecture augmented with the Convolutional Block Attention Module (CBAM) to recognise Malaysian Sign Language (MSL) in videos. A dataset of 2071 videos covering 19 dynamic signs was created for this study. Two experiments were conducted on the dynamic signs using CBAM-3DResNet, inserting CBAM either 'Within Blocks' or 'Before Classifier'. Accuracy, loss, precision, recall, F1-score, confusion matrices, and training time were recorded to evaluate each model's efficiency. Results showed that the CBAM-ResNet models performed well on the video recognition task, achieving recognition rates above 90% with little variation. The 'Before Classifier' variant of CBAM-ResNet proved more efficient than the 'Within Blocks' variant. All experimental results indicate that CBAM-ResNet 'Before Classifier' is effective at recognising Malaysian Sign Language and merits further research.
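The abstract contrasts two placements of the CBAM attention module. As a minimal sketch of the CBAM idea itself (not the authors' implementation), the module below applies channel attention followed by spatial attention to a feature map. The weight matrices `w1`/`w2` stand in for the module's learned shared MLP, and a fixed averaging of the mean and max maps stands in for the learned 7×7 convolution in the spatial branch; all of these are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Scale each channel by attention computed from pooled descriptors.

    x: feature map of shape (C, H, W).
    w1, w2: shared MLP weights (hypothetical, stand in for learned params).
    """
    avg = x.mean(axis=(1, 2))  # global average pool -> (C,)
    mx = x.max(axis=(1, 2))    # global max pool -> (C,)
    # Shared MLP (ReLU hidden layer) applied to both descriptors, then summed.
    scale = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    return x * scale[:, None, None]

def spatial_attention(x):
    """Scale each spatial location by attention from channel-wise statistics.

    A fixed average of the mean and max maps replaces the learned 7x7 conv
    of the original CBAM design (illustrative simplification).
    """
    avg = x.mean(axis=0)  # (H, W)
    mx = x.max(axis=0)    # (H, W)
    scale = sigmoid((avg + mx) / 2.0)
    return x * scale[None, :, :]

def cbam(x, w1, w2):
    """Sequential channel-then-spatial attention, as in CBAM."""
    return spatial_attention(channel_attention(x, w1, w2))

# Example: refine an 8-channel 5x5 feature map with a reduction ratio of 2.
rng = np.random.default_rng(0)
features = rng.standard_normal((8, 5, 5))
w1 = rng.standard_normal((4, 8)) * 0.1  # C -> C/r
w2 = rng.standard_normal((8, 4)) * 0.1  # C/r -> C
refined = cbam(features, w1, w2)
print(refined.shape)  # same shape as the input feature map
```

In the 'Within Blocks' setting such a module would be applied inside each residual block, whereas 'Before Classifier' applies it once to the final feature map before the fully connected layer.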
Citation
Khan, R. U., Wong, W. S., Ullah, I., Algarni, F., Ul Haq, M. I., bin Barawi, M. H., & Khan, M. A. (2022). Evaluating the efficiency of CBAM-Resnet using Malaysian sign language. Computers, Materials and Continua, 71(2), 2755–2772. https://doi.org/10.32604/cmc.2022.022471