Real Time Conversion of Hand Gestures to Speech using Vision Based Technique

Abstract

Sign language is one of the most common modes of communication used by people with hearing and speech impairments. These languages consist of well-defined sets of gestures, patterns, and sequences of actions that convey meaningful words and sentences. This paper presents different algorithms and techniques for automating single-hand gesture detection and recognition using vision-based methods. It uses the basic structure of the hand and properties such as the centroid to detect the pattern formed by the fingers and thumb and to assign code bits, i.e., converting each gesture into a five-digit representation; motion is detected from the movement of the centroid across frames. The paper uses K-means clustering or thresholding for background elimination, the convex hull or a proposed algorithm for peak detection, and a text-to-speech API to convert the words/sentences corresponding to the gestures into speech. Combinations of these techniques, such as thresholding with convex hull or clustering with the proposed algorithm, are implemented and the results are compared.
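The abstract outlines a pipeline of background elimination, centroid computation, peak (finger) detection, gesture-to-code mapping, and speech output. The following is a minimal sketch of that pipeline, assuming OpenCV for the vision steps and pyttsx3 for text-to-speech; the lookup table GESTURE_WORDS, the helper names, and the defect-depth threshold are illustrative assumptions, not details taken from the paper.

```python
"""Sketch of a thresholding + convex-hull gesture pipeline (assumed libraries:
OpenCV, NumPy, pyttsx3). Names and thresholds are illustrative only."""
import math

import cv2
import numpy as np
import pyttsx3

# Hypothetical lookup: 5 code bits (thumb + four fingers) -> word.
GESTURE_WORDS = {
    (0, 0, 0, 0, 0): "stop",
    (1, 1, 1, 1, 1): "hello",
    (0, 1, 1, 0, 0): "peace",
}


def segment_hand(frame):
    """Background elimination via Otsu thresholding (one of the two options
    named in the abstract; K-means clustering is the alternative)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask


def hand_features(mask):
    """Return the hand contour, its centroid, and a rough raised-finger count
    obtained from convexity defects of the convex hull (peak detection)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None, 0
    hand = max(contours, key=cv2.contourArea)

    m = cv2.moments(hand)
    if m["m00"] == 0:
        return hand, None, 0
    centroid = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    fingers = 0
    if defects is not None:
        for s, e, f, depth in defects[:, 0]:
            start, end, far = hand[s][0], hand[e][0], hand[f][0]
            a = np.linalg.norm(end - start)
            b = np.linalg.norm(far - start)
            c = np.linalg.norm(end - far)
            # Valleys between raised fingers form acute angles at the defect.
            cos_angle = np.clip((b * b + c * c - a * a) / (2 * b * c + 1e-6),
                                -1.0, 1.0)
            if math.acos(cos_angle) < math.pi / 2 and depth > 10000:
                fingers += 1
        fingers += 1  # n valleys between fingers -> n + 1 raised fingers
    return hand, centroid, fingers


def speak(text):
    """Text-to-speech step for the recognized word or sentence."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
```

In a real-time loop, each captured frame would be passed through segment_hand and hand_features, the centroid tracked across consecutive frames to detect motion, and the resulting 5-bit code looked up in a table such as GESTURE_WORDS before calling speak().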

CITATION STYLE

APA

Mundada, S. G., Khurana, K., & Bagora, A. (2019). Real Time Conversion of Hand Gestures to Speech using Vision Based Technique. International Journal of Innovative Technology and Exploring Engineering, 8(9), 3184–3190. https://doi.org/10.35940/ijitee.i8748.078919
