Non-touch sign word recognition based on dynamic hand gesture using hybrid segmentation and CNN feature fusion

Abstract

Hand gesture-based sign language recognition is a promising application of human-computer interaction (HCI) through which deaf and hard-of-hearing people and their family members can communicate with the help of a computer. To serve this community, this paper presents a non-touch sign word recognition system that translates the gesture of a sign word into text. However, uncontrolled environments, varying lighting conditions, and partial occlusion can greatly reduce the reliability of hand gesture recognition. To address this, a hybrid segmentation technique combining YCbCr and SkinMask segmentation is developed to isolate the hand, and features are extracted through feature fusion in a convolutional neural network (CNN). The YCbCr branch performs color-space conversion, binarization, erosion, and finally hole filling to obtain the segmented images; SkinMask images are obtained by matching the color of the hand. Finally, a multiclass SVM classifier assigns the hand gesture to a sign word. The signs of twenty common words are evaluated in real time, and the test results confirm that the system not only obtains better-segmented images but also achieves a higher recognition rate than conventional methods.
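
The abstract names the segmentation steps but no parameters, so the following is a minimal sketch of what the two branches could look like, assuming OpenCV and NumPy. The skin-color thresholds, kernel size, input filename, and the corner seed for hole filling are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Illustrative skin-color bounds; the paper's exact thresholds are not
# stated in the abstract, so these values are assumptions.
YCRCB_LOW  = np.array([0, 133, 77], dtype=np.uint8)
YCRCB_HIGH = np.array([255, 173, 127], dtype=np.uint8)
HSV_LOW    = np.array([0, 40, 60], dtype=np.uint8)
HSV_HIGH   = np.array([25, 255, 255], dtype=np.uint8)

def ycbcr_segment(bgr):
    """YCbCr branch: conversion, binarization, erosion, hole filling."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)     # color-space conversion
    mask = cv2.inRange(ycrcb, YCRCB_LOW, YCRCB_HIGH)   # binarization
    mask = cv2.erode(mask, np.ones((3, 3), np.uint8))  # erosion
    # Hole filling: flood-fill the background from a corner (assumed to be
    # background), invert, and OR the interior holes back into the mask.
    flood = mask.copy()
    h, w = mask.shape
    cv2.floodFill(flood, np.zeros((h + 2, w + 2), np.uint8), (0, 0), 255)
    return cv2.bitwise_or(mask, cv2.bitwise_not(flood))

def skinmask_segment(bgr):
    """SkinMask branch: keep pixels whose color matches the hand."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, HSV_LOW, HSV_HIGH)

if __name__ == "__main__":
    frame = cv2.imread("gesture.jpg")  # hypothetical input frame
    if frame is not None:
        hand_ycbcr = ycbcr_segment(frame)
        hand_skin = skinmask_segment(frame)
        # Downstream (not shown): both segmented images feed CNN branches
        # whose feature maps are fused and classified by a multiclass SVM.
```

In the paper's pipeline, CNN features extracted from the two segmented streams are fused before the multiclass SVM; the sketch stops at segmentation because the abstract does not specify the network architecture.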

Citation (APA)

Rahim, M. A., Islam, M. R., & Shin, J. (2019). Non-touch sign word recognition based on dynamic hand gesture using hybrid segmentation and CNN feature fusion. Applied Sciences (Switzerland), 9(18), 3790. https://doi.org/10.3390/app9183790
