Hand gesture recognition and voice conversion for deaf and Dumb

5 citations · 44 Mendeley readers

Abstract

In this paper, we propose a hand gesture recognition model that can be used in real-time applications. The model is built on Google's MediaPipe framework together with TensorFlow, OpenCV, and Python, with classification performed by a feed-forward neural network implemented in Keras. The proposed pipeline consists of three modules: grabbing frames, detecting hand landmarks, and classification. The model achieves 95.7% accuracy at recognizing 10 kinds of hand gestures (thumbs up, thumbs down, peace, smile, rock, ok, fist, live long, call me, stop). A hand gesture recognition model that reacts rapidly with generally acceptable accuracy, together with a pre-trained model for feature extraction, is one of this work's primary contributions. The novelty of the proposed approach is that it detects hand landmarks using Google's MediaPipe, which is faster and more accurate than traditional methods that rely on geometry, shape, and edge data. For modelling sequence data and recognizing gestures, the LSTM model has also proven quite successful.
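The classification stage described above can be sketched in miniature: MediaPipe's hand tracker emits 21 landmarks per hand, each with (x, y, z) coordinates, giving 63 features that a feed-forward network maps to scores over the 10 gestures. The sketch below uses NumPy with randomly initialised weights in place of the paper's trained Keras model; the hidden-layer size (32) and the ReLU/softmax choices are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# The 10 gesture classes named in the abstract.
GESTURES = ["thumbs up", "thumbs down", "peace", "smile", "rock",
            "ok", "fist", "live long", "call me", "stop"]

def softmax(z):
    """Numerically stable softmax over a score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(landmarks, W1, b1, W2, b2):
    """Feed-forward pass: 21 MediaPipe landmarks -> 10 gesture probabilities."""
    x = landmarks.reshape(-1)          # 21 landmarks x (x, y, z) = 63 features
    h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer (size is an assumption)
    return softmax(W2 @ h + b2)        # probability for each of the 10 gestures

# Random weights stand in for a trained Keras model in this sketch.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 63)) * 0.1, np.zeros(32)
W2, b2 = rng.normal(size=(10, 32)) * 0.1, np.zeros(10)

# A dummy frame's worth of landmarks; in the real pipeline these come from
# MediaPipe Hands run on a frame grabbed with OpenCV.
probs = classify(rng.random((21, 3)), W1, b1, W2, b2)
print(GESTURES[int(np.argmax(probs))])
```

In the full system, a trained Keras model would replace the random weights, and the predicted label would feed the voice-conversion stage.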

Citation (APA)

Mopidevi, S., Biradhar, S., Bobberla, N., & Buddati, K. S. (2023). Hand gesture recognition and voice conversion for deaf and Dumb. In E3S Web of Conferences (Vol. 391). EDP Sciences. https://doi.org/10.1051/e3sconf/202339101060
