Real-Time Sign Language Recognition and Translation Using Deep Learning Techniques

  • Tazyeen Fathima
  • Ashif Alam
  • Ashish Gangwar
  • et al.

Abstract

Sign Language Recognition (SLR) identifies hand gestures and produces the corresponding text or speech. Despite advances in deep learning, SLR still faces challenges in recognition accuracy and robustness to visual quality. Sign Language Translation (SLT), which aims to translate sign language images or videos into spoken language, is further hampered by the limited size of available datasets. This paper presents an approach to sign language recognition and conversion to text using a custom dataset of 15 classes, each containing 70-75 images. The proposed solution uses the YOLOv5 architecture, a state-of-the-art Convolutional Neural Network (CNN), to achieve robust and accurate sign language recognition. With careful training and optimization, the model achieves mean Average Precision (mAP) values of 92% to 99% across the 15 classes. The custom dataset combined with the YOLOv5 model enables effective real-time sign language interpretation, showing potential to improve accessibility and communication for the hearing impaired. This work lays the groundwork for further advances in sign language recognition systems, with implications for inclusive technology applications.
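The abstract reports per-class mAP figures of 92-99%. As a minimal sketch of how such numbers are computed, the snippet below implements the standard all-point-interpolation Average Precision over a precision-recall curve and averages it across classes. The function names are illustrative, and the exact interpolation scheme YOLOv5's evaluation uses may differ slightly (e.g. COCO's 101-point variant); this is not the authors' code.

```python
def average_precision(recalls, precisions):
    """AP via all-point interpolation: make the precision envelope
    monotonically decreasing, then integrate it over recall."""
    mrec = [0.0] + list(recalls) + [1.0]
    mpre = [0.0] + list(precisions) + [0.0]
    # Walk backwards so each precision is the max of everything to its right.
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    # Sum precision * recall-step (area under the interpolated curve).
    return sum((mrec[i] - mrec[i - 1]) * mpre[i] for i in range(1, len(mrec)))

def mean_average_precision(per_class_ap):
    """mAP is simply the mean of the per-class AP values."""
    return sum(per_class_ap) / len(per_class_ap)

# Example: two detection operating points at (recall, precision)
# (0.5, 1.0) and (1.0, 0.5) yield AP = 0.75.
ap = average_precision([0.5, 1.0], [1.0, 0.5])
```

Averaging hypothetical per-class APs in the paper's reported range, e.g. `mean_average_precision([0.92, 0.96, 0.99])`, gives the overall mAP figure a detector's evaluation script would print.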

Citation (APA)

Tazyeen Fathima, Ashif Alam, Ashish Gangwar, Dev Kumar Khetan, & Prof. Ramya K. (2024). Real-Time Sign Language Recognition and Translation Using Deep Learning Techniques. International Research Journal on Advanced Engineering Hub (IRJAEH), 2(02), 93–97. https://doi.org/10.47392/irjaeh.2024.0018
