Indonesian Sign Language Image Detection Using Convolutional Neural Network (CNN) Method

  • Sihananto A. N.
  • Safitri E. M.
  • Maulana Y.
  • Fakhruddin F.
  • Yudistira M. E.

Abstract

In Indonesia, the deaf community uses two sign languages, SIBI and BISINDO. Unfortunately, most non-deaf individuals and deaf companions are not proficient in sign language. To address this communication gap, information systems can play a pivotal role in recognizing sign language gestures. This study applied the Convolutional Neural Network (CNN) algorithm to classify sign language images from both SIBI and BISINDO datasets, with the aim of developing a model that accurately translates sign language into written or spoken language, thus bridging the gap between deaf and non-deaf individuals. The CNN algorithm performed best at epoch 50 for SIBI, with a testing accuracy of 93.29%, while for BISINDO it achieved its best result at epoch 40, with a testing accuracy of 82.32%. These results suggest that the CNN algorithm can accurately recognize and translate sign language, improving communication between deaf and non-deaf individuals in Indonesia.
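The abstract does not describe the network architecture or training setup, so the following is only a minimal sketch of a CNN image classifier of the kind the study describes. The input size (64x64 RGB), class count (26 letter signs), layer sizes, optimizer, batch size, and dataset paths are all illustrative assumptions, not the authors' configuration; only the 50-epoch training run for SIBI is taken from the abstract.

```python
# Minimal CNN sign-language image classifier sketch (Keras).
# Assumptions (not from the paper): 64x64 RGB inputs, 26 classes
# (one per letter sign), and a dataset folder with one subfolder per class.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26      # assumed: one class per letter sign
IMG_SIZE = (64, 64)   # assumed input resolution

def build_model():
    model = models.Sequential([
        layers.Input(shape=(*IMG_SIZE, 3)),
        layers.Rescaling(1.0 / 255),                      # normalize pixel values
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),  # class probabilities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Hypothetical directory layout: data/sibi/{train,val}/<class_name>/*.jpg
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/sibi/train", image_size=IMG_SIZE, batch_size=32)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "data/sibi/val", image_size=IMG_SIZE, batch_size=32)

    model = build_model()
    # 50 epochs matches the SIBI result reported in the abstract.
    model.fit(train_ds, validation_data=val_ds, epochs=50)
    loss, acc = model.evaluate(val_ds)
    print(f"test accuracy: {acc:.2%}")
```

In a setup like this, the reported epoch comparison (50 for SIBI vs. 40 for BISINDO) would correspond to evaluating the held-out accuracy after training runs of different lengths and keeping the best-performing checkpoint.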

Citation (APA)

Sihananto, A. N., Safitri, E. M., Maulana, Y., Fakhruddin, F., & Yudistira, M. E. (2023). Indonesian Sign Language Image Detection Using Convolutional Neural Network (CNN) Method. Inspiration: Jurnal Teknologi Informasi Dan Komunikasi, 13(1), 13–21. https://doi.org/10.35585/inspir.v13i1.37
