Translation of Sign Language Finger-Spelling to Text using Image Processing

  • Modi K
  • More A

Abstract

It is difficult for most of us to imagine, but many deaf-mute people rely on sign language as their primary means of communication; in essence, they hear and talk through their hands. Sign languages are visual, natural languages used by many deaf-mute people all over the world. In sign language, the hands convey most of the information, so vision-based automatic sign language recognition systems must extract relevant hand features from real-life image sequences to allow correct and stable gesture classification. In our proposed system, we intend to recognize some very basic elements of sign language and translate them to text. First, video is captured frame-by-frame and processed to extract the appropriate image. The retrieved image is further processed using BLOB analysis and compared against a statistical database of stored images; the matched image determines which alphabet sign was performed. Here, we implement only American Sign Language finger-spellings and construct words and sentences from them.

General Terms: sign language translation, gesture recognition system.
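To make the described pipeline concrete, the following is a minimal sketch of one possible implementation in Python with OpenCV. It is not the authors' code: the skin-colour thresholds, the 64×64 normalization, the template-matching score, and the `templates/` directory are hypothetical stand-ins for the paper's BLOB analysis and statistical database of stored signs.

```python
import cv2
import numpy as np

# Hypothetical template database: one stored binary hand image per static
# ASL letter (J and Z involve motion and are excluded in this sketch).
TEMPLATE_DB = {letter: cv2.imread(f"templates/{letter}.png", cv2.IMREAD_GRAYSCALE)
               for letter in "ABCDEFGHIKLMNOPQRSTUVWXY"}

def extract_hand_blob(frame):
    """Segment the hand region and return it as a size-normalized binary image."""
    # Skin-colour thresholding in YCrCb space (one common choice, assumed here).
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # BLOB analysis: keep the largest connected component, assumed to be the hand.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(hand)
    return cv2.resize(mask[y:y + h, x:x + w], (64, 64))

def classify_sign(blob):
    """Compare the extracted blob with every stored template; return the best match."""
    best_letter, best_score = None, -1.0
    for letter, template in TEMPLATE_DB.items():
        if template is None:
            continue
        template = cv2.resize(template, (64, 64))
        score = cv2.matchTemplate(blob, template, cv2.TM_CCOEFF_NORMED)[0][0]
        if score > best_score:
            best_letter, best_score = letter, score
    return best_letter

cap = cv2.VideoCapture(0)          # capture video frame-by-frame from a webcam
word = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = extract_hand_blob(frame)
    if blob is not None:
        letter = classify_sign(blob)
        if letter:
            word.append(letter)    # recognized letters accumulate into words
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
print("".join(word))
```

In this sketch the "statistical database" is reduced to straightforward template matching for clarity; the paper's system may use richer features or matching statistics than a single normalized correlation score.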

Citation (APA)

Modi, K., & More, A. (2013). Translation of Sign Language Finger-Spelling to Text using Image Processing. International Journal of Computer Applications, 77(11), 32–37. https://doi.org/10.5120/13440-1313
