Gesture recognition and machine learning applied to sign language translation

Abstract

In this paper we propose an intelligent system for translating sign language into text. The approach combines hardware and software: the hardware consists of flex, contact, and inertial sensors mounted on a polyester-nylon glove, and the software is a classification algorithm based on k-nearest neighbors, decision trees, and dynamic time warping. The system recognizes both static and dynamic gestures and can learn to classify the specific gesture patterns of any person. It was tested on translating 61 letters, numbers, and words from Ecuadorian Sign Language. Experimental results show a classification accuracy of 91.55%, a significant improvement over the results reported in previous related works.
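
The implementation itself is not published with this abstract, so the listing below is only a minimal sketch of how the dynamic-gesture branch described above could be realized: a k-nearest-neighbor vote over dynamic time warping distances between a new glove-sensor sequence and stored training templates. The function names, the number of sensor channels, and all parameters are assumptions made for illustration, and the decision-tree stage mentioned in the abstract is omitted.

    # Illustrative sketch only; sensor layout and parameters are assumptions,
    # not the authors' implementation.
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic time warping distance between two multivariate
        sensor sequences a (n x d) and b (m x d)."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])   # per-frame distance
                cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                     cost[i, j - 1],       # deletion
                                     cost[i - 1, j - 1])   # match
        return cost[n, m]

    def knn_classify(query, templates, labels, k=3):
        """k-nearest-neighbor vote over DTW distances to stored gesture templates."""
        dists = [dtw_distance(query, t) for t in templates]
        nearest = np.argsort(dists)[:k]
        votes = [labels[i] for i in nearest]
        return max(set(votes), key=votes.count)

    # Example with synthetic glove readings (20 frames x 5 flex channels):
    rng = np.random.default_rng(0)
    templates = [rng.random((20, 5)) for _ in range(6)]    # stored training gestures
    labels = ["A", "A", "B", "B", "C", "C"]                # corresponding sign labels
    query = templates[2] + 0.01 * rng.random((20, 5))      # noisy repetition of "B"
    print(knn_classify(query, templates, labels, k=1))     # prints "B"

DTW is used here because it compares gesture sequences of different durations, which is why it is commonly paired with a nearest-neighbor classifier for dynamic gestures.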

Citation (APA)

Estrada Jiménez, L. A., Benalcázar, M. E., & Sotomayor, N. (2017). Gesture recognition and machine learning applied to sign language translation. In IFMBE Proceedings (Vol. 60, pp. 233–236). Springer Verlag. https://doi.org/10.1007/978-981-10-4086-3_59
