Investigation of sign language recognition performance by integration of multiple feature elements and classifiers


Abstract

Sign languages are used both by people with hearing or speech impairments and by hearing individuals communicating with them. Acquiring sign language skills is difficult because there is a vast number of sign language words and some signing motions are highly complex. Several machine translation attempts have been investigated for a limited number of sign language motions using KINECT or a data glove equipped with strain gauges that monitor finger bending angles, in order to detect hand motions and hand shapes. A key feature of the proposed method is the use of an optical camera and colored gloves to detect sign language motion. Because the optical camera is built into a smartphone, the method is not restricted to a particular place or occasion when used as a machine translation tool. The authors propose two new schemes. The first adds two feature elements, hand direction, obtained from the angle between the wrist and fingertips, and hand rotation, calculated from the visible size of the palm and wrist, to the four conventional elements of motion trajectory, motion velocity, hand position, and hand shape. The second integrates the results obtained by each classifier to enhance recognition performance. Six kinds of classifiers were applied to 35 sign language motions. A total of 3150 pieces of motion data (2100 as training data and 1050 as evaluation data) were used to evaluate the proposed method, and the recognition results were examined by integrating the feature elements and classifiers. The success rate for the 35 words was 76.2% when only the first-ranked answer was counted, and 94.2% when the first, second, or third ranked answer was counted. These values suggest that the proposed method could serve as a review tool for assessing how well learners have mastered sign language motions.
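The abstract does not specify how the hand-direction feature, the classifier integration, or the top-ranked evaluation are implemented, so the sketch below is illustrative only. It assumes 2-D image coordinates for the wrist and a fingertip, and it uses Borda-count rank fusion as a stand-in for the unspecified integration rule; the function names and the choice of fusion method are assumptions, not the paper's method.

```python
import math
from collections import defaultdict

def hand_direction(wrist, fingertip):
    """Angle (degrees) of the wrist-to-fingertip vector in the image plane.

    A hypothetical sketch of the 'hand direction' feature element; the
    paper's exact formulation may differ.
    """
    return math.degrees(math.atan2(fingertip[1] - wrist[1],
                                   fingertip[0] - wrist[0]))

def integrate_rankings(rankings, num_ranks=3):
    """Fuse ranked answer lists from several classifiers by Borda count.

    `rankings` holds one best-first word list per classifier. Borda count
    is an assumed fusion rule (the abstract only says per-classifier
    results are integrated): each classifier gives num_ranks points to
    its top answer, num_ranks - 1 to the next, and so on.
    """
    scores = defaultdict(int)
    for ranked in rankings:
        for rank, word in enumerate(ranked[:num_ranks]):
            scores[word] += num_ranks - rank
    # Sort by descending score, breaking ties alphabetically.
    return sorted(scores, key=lambda w: (-scores[w], w))

def top_k_accuracy(predictions, labels, k):
    """Fraction of samples whose true label is among the first k answers.

    The reported 76.2% and 94.2% figures correspond to k = 1 and k = 3
    over the integrated rankings.
    """
    hits = sum(label in ranked[:k]
               for ranked, label in zip(predictions, labels))
    return hits / len(labels)
```

With three classifiers ranking the candidate words `["a", "b", "c"]`, `["b", "a", "c"]`, and `["a", "c", "b"]`, `integrate_rankings` scores word `a` highest (8 points) and returns `["a", "b", "c"]`.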


CITATION STYLE

APA

Ozawa, T., Okayasu, Y., Dahlan, M., Nishimura, H., & Tanaka, H. (2018). Investigation of sign language recognition performance by integration of multiple feature elements and classifiers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10904 LNCS, pp. 291–305). Springer Verlag. https://doi.org/10.1007/978-3-319-92043-6_25
