American sign language character recognition using convolution neural network


Abstract

Communication is an important part of our lives. Deaf and mute people, being unable to speak and listen, experience many problems while communicating with others. There are many ways by which people with these disabilities try to communicate; one of the most prominent is the use of sign language, i.e. hand gestures. It is therefore necessary to develop an application for recognizing the gestures and actions of sign language, so that deaf and mute people can communicate easily even with those who do not understand sign language. The objective of this work is to take an elementary step toward breaking the communication barrier between deaf and mute people and the rest of the population with the help of sign language. The image dataset in this work consists of 2524 ASL gestures, which were used as input to the pre-trained VGG16 model. VGG16 is a vision model developed by the Visual Geometry Group at Oxford. The accuracy of the model obtained using the Convolutional Neural Network was about 96%.
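The approach described in the abstract, reusing a pre-trained VGG16 as a fixed feature extractor with a new classification head, can be sketched as below. This is a minimal illustration, not the authors' implementation: the number of classes (26, one per ASL letter), the head layers, and the hyperparameters are assumptions, and `weights=None` is used here in place of the pre-trained ImageNet weights to keep the sketch offline.

```python
# Hedged sketch of VGG16 transfer learning for ASL character recognition.
# Assumptions (not from the paper): 26 letter classes, 224x224 RGB inputs,
# a Flatten + Dense head, and the Adam optimizer.
import tensorflow as tf

NUM_CLASSES = 26           # assumption: one class per ASL letter
IMG_SHAPE = (224, 224, 3)  # VGG16's standard input size

# Convolutional base only (include_top=False); passing weights="imagenet"
# instead would load the pre-trained filters as in the paper.
base = tf.keras.applications.VGG16(
    include_top=False, weights=None, input_shape=IMG_SHAPE)
base.trainable = False     # freeze the feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(...) would then be called on the 2524-image gesture dataset.
```

With the base frozen, only the new dense head is trained, which is what makes a dataset of roughly 2500 images viable for a network as large as VGG16.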

Citation (APA)

Masood, S., Thuwal, H. C., & Srivastava, A. (2018). American sign language character recognition using convolution neural network. In Smart Innovation, Systems and Technologies (Vol. 78, pp. 403–412). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-10-5547-8_42
