Deep Transfer Learning for Sign Language Image Classification: A Bisindo Dataset Study

  • Rachmawati I
  • Yunanda R
  • Hidayat M
  • et al.

Abstract

This study aims to identify and classify the BISINDO (Indonesian Sign Language) dataset, which consists primarily of image data, using deep transfer learning with three pre-trained models: ResNet50, MobileNetV4, and InceptionV3. The primary objective is to evaluate and compare the performance of each model based on the loss obtained during training, validation, and testing. Training loss gives a rough indication of how well a model fits the BISINDO dataset; validation loss measures its ability to generalize; and test loss serves as the final litmus test, assessing how well the model classifies previously unseen sign language images. The results of these experiments determine which model is most effective and achieves the highest performance in sign language recognition on the BISINDO dataset.
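The selection criterion described above — compare each pre-trained model's training, validation, and test loss, then pick the model with the lowest test loss — can be sketched as follows. The loss values and the `best_model` helper are hypothetical illustrations, not results reported in the paper:

```python
# Hedged sketch of the model-selection step: each candidate backbone is
# associated with its training, validation, and test loss, and the model
# with the lowest test loss is chosen. All numbers are placeholders.

losses = {
    "ResNet50":    {"train": 0.12, "val": 0.25, "test": 0.28},
    "MobileNetV4": {"train": 0.15, "val": 0.22, "test": 0.24},
    "InceptionV3": {"train": 0.10, "val": 0.27, "test": 0.30},
}

def best_model(results: dict) -> str:
    """Return the model name with the lowest test loss (the final litmus test)."""
    return min(results, key=lambda name: results[name]["test"])

print(best_model(losses))
```

Note that test loss, not training loss, drives the choice: a model can fit the training set closely (low training loss) yet generalize poorly, which is why the abstract treats test loss as the decisive measure.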

Citation (APA)
Rachmawati, I. D. A., Yunanda, R., Hidayat, M. F., & Wicaksono, P. (2023). Deep Transfer Learning for Sign Language Image Classification: A Bisindo Dataset Study. Engineering, MAthematics and Computer Science Journal (EMACS), 5(3), 175–180. https://doi.org/10.21512/emacsjournal.v5i3.10621
