Integrated Mediapipe with a CNN Model for Arabic Sign Language Recognition

  • Moustafa A
  • Mohd Rahim M
  • Bouallegue B
  • et al.
3 citations · 28 Mendeley readers

Abstract

Deaf and hard-of-hearing people struggle to communicate on a day-to-day basis. Recent advances in artificial intelligence (AI) make it possible to remove this communication barrier. As a result of this effort, a letter recognition system for Arabic sign language (ArSL) has been developed. The ArSL recognition system uses a deep convolutional neural network (CNN) to process depth data and improve the ability of hearing-impaired people to communicate with others. In the proposed model, letters of the hand-sign alphabet and the Arabic alphabet are recognized and identified automatically from user input. The proposed model identifies ArSL with an accuracy of 97.1%. To evaluate the approach, we carried out a comparative study and found that it distinguishes static signs with higher accuracy than prior studies achieved on the same dataset.
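
For readers who want a concrete picture of the pipeline the abstract describes, below is a minimal sketch of how MediaPipe hand landmarks might be extracted and fed to a small Keras CNN classifier over the 28-letter Arabic alphabet. The layer sizes, class count, and function names here are illustrative assumptions, not the architecture or code reported in the paper.

```python
# Minimal sketch: MediaPipe hand-landmark extraction feeding a small Keras CNN.
# All layer sizes, the 28-class output, and the helper names are illustrative
# assumptions; they are NOT the authors' reported architecture.
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

mp_hands = mp.solutions.hands

def extract_landmarks(image_bgr):
    """Return a (21, 3) array of hand-landmark (x, y, z) coordinates, or None."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        result = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    points = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in points], dtype=np.float32)

def build_model(num_classes=28):
    """Small 1D CNN over the 21 landmark points (assumed, for illustration)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(21, 3)),
        tf.keras.layers.Conv1D(64, 3, activation="relu"),
        tf.keras.layers.Conv1D(128, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

# Example usage (hypothetical file name):
#   landmarks = extract_landmarks(cv2.imread("sign.jpg"))
#   model = build_model()
#   probs = model.predict(landmarks[None, ...])  # predicted letter probabilities
```

A landmark-based input of this kind keeps the classifier small compared with feeding raw images, which is one plausible reason for pairing MediaPipe with a CNN; the actual preprocessing and training details are described in the full paper.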

Citation (APA)

Moustafa, A. M. J. A., Mohd Rahim, M. S., Bouallegue, B., Khattab, M. M., Soliman, A. M., Tharwat, G., & Ahmed, A. M. (2023). Integrated Mediapipe with a CNN Model for Arabic Sign Language Recognition. Journal of Electrical and Computer Engineering, 2023, 1–15. https://doi.org/10.1155/2023/8870750
