Dynamic Gesture Classification for Vietnamese Sign Language Recognition

  • Vo D.-H.
  • Huynh H.-H.
  • Doan P.-M.
  • Meunier J.

Abstract

This paper presents an approach to feature extraction and classification for recognizing continuous dynamic gestures corresponding to Vietnamese Sign Language (VSL). Input data are captured by the depth sensor of a Microsoft Kinect, which is largely unaffected by environmental lighting. In detail, each gesture is represented by a volume corresponding to a sequence of depth images. The feature extraction stage divides this volume into a 3D grid of equally sized blocks, each of which is then converted into a scalar value. This step is followed by classification. The well-known Support Vector Machine (SVM) method is employed in this work, and the Hidden Markov Model (HMM) technique is also applied to provide a comparison of recognition accuracy. In addition, a dataset of 3000 samples corresponding to 30 dynamic gestures in VSL was created by 5 volunteers. Experiments on this dataset validate the approach and show promising results, with an average accuracy of up to 95%.
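The abstract's feature-extraction step (split the depth-image volume into a 3D grid of equally sized blocks, reduce each block to one scalar, then classify with an SVM) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the grid size, the use of mean depth as the per-block scalar, and the RBF kernel are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def volume_to_features(volume, grid=(4, 4, 4)):
    """Divide a gesture volume (frames x height x width) into a 3D grid
    of equally sized blocks and reduce each block to one scalar.
    The mean-depth reduction and the 4x4x4 grid are assumptions."""
    gt, gh, gw = grid
    t, h, w = volume.shape
    # Trim so each dimension divides evenly into the grid.
    volume = volume[: t - t % gt, : h - h % gh, : w - w % gw]
    bt = volume.shape[0] // gt
    bh = volume.shape[1] // gh
    bw = volume.shape[2] // gw
    blocks = volume.reshape(gt, bt, gh, bh, gw, bw)
    return blocks.mean(axis=(1, 3, 5)).ravel()  # one scalar per block

# Toy usage: random stand-ins for depth-image sequences of two gesture classes.
rng = np.random.default_rng(0)
X = np.stack([volume_to_features(rng.random((20, 64, 64))) for _ in range(10)])
y = np.arange(10) % 2
clf = SVC(kernel="rbf").fit(X, y)
```

Each gesture thus becomes a fixed-length vector (here 4·4·4 = 64 values) regardless of how many frames it spans, which is what makes a standard SVM applicable to variable-length gesture sequences.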

Cite

APA

Vo, D.-H., Huynh, H.-H., Doan, P.-M., & Meunier, J. (2017). Dynamic Gesture Classification for Vietnamese Sign Language Recognition. International Journal of Advanced Computer Science and Applications, 8(3). https://doi.org/10.14569/ijacsa.2017.080357
