Although there have been attempts to tackle hand gesture recognition "in the wild", deploying such methods in practical applications still faces major issues such as viewpoint change, cluttered backgrounds, and low resolution of hand regions. In this paper, we investigate these issues with a framework designed around both varying features and multi-view analysis. Within the framework, we embed both hand-crafted features and features learned with a Convolutional Neural Network (CNN) to represent gestures at a single view. We then employ techniques based on multi-view discriminant analysis (MvDA) to build a discriminant common space by jointly learning multiple view-specific linear transforms from the multiple views. To evaluate the effectiveness of the proposed framework, we construct a new multi-view dataset of twelve gestures, captured by five cameras uniformly spaced on a half circle frontally surrounding the user in a human-machine interaction setting. We then evaluate each scheme designed in the proposed framework, report its accuracy, and discuss the results with a view to developing practical applications. Experimental results show promising performance for building natural and friendly hand-gesture-based applications.
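The core of the MvDA step described above is to learn one linear transform per camera view so that, after projection, samples of the same gesture class from all views cluster together while different classes separate. A minimal sketch of this idea follows, assuming each view supplies a feature matrix of shape (d_v, n) with labels shared across views; the function name `mvda` and its interface are illustrative, not the paper's actual code. It builds the block-structured within-class and between-class scatter matrices over the stacked view spaces and solves a regularized generalized eigenvalue problem for the joint transforms.

```python
# Illustrative sketch of Multi-view Discriminant Analysis (MvDA).
# Assumption: every view observes the same n samples, with one shared
# label vector; feature dimensions d_v may differ across views.
import numpy as np
from scipy.linalg import eigh

def mvda(Xs, labels, dim, reg=1e-6):
    """Jointly learn one linear transform per view that maps all views
    into a common discriminant space.

    Xs     : list of arrays, view v has shape (d_v, n)
    labels : (n,) integer class labels, aligned across views
    dim    : dimensionality of the common space
    returns: list of (d_v, dim) view-specific transforms
    """
    V = len(Xs)
    dims = [X.shape[0] for X in Xs]
    offs = np.cumsum([0] + dims)          # block offsets in the stacked space
    D = offs[-1]
    classes = np.unique(labels)
    n_total = V * len(labels)             # samples pooled over all views

    Sw = np.zeros((D, D))                 # within-class scatter (block form)
    Sb = np.zeros((D, D))                 # between-class scatter (block form)
    s_view = [np.zeros(d) for d in dims]  # per-view sums over all samples

    for c in classes:
        idx = labels == c
        n_c = V * idx.sum()               # class-c samples pooled over views
        s = [X[:, idx].sum(axis=1) for X in Xs]   # per-view class sums
        for j in range(V):
            s_view[j] += s[j]
            a, b = offs[j], offs[j + 1]
            # block-diagonal second moments of class c in view j
            Sw[a:b, a:b] += Xs[j][:, idx] @ Xs[j][:, idx].T
            for r in range(V):
                ar, br = offs[r], offs[r + 1]
                cross = np.outer(s[j], s[r]) / n_c
                Sw[a:b, ar:br] -= cross   # remove class-mean contribution
                Sb[a:b, ar:br] += cross
    # subtract the overall-mean term from the between-class scatter
    for j in range(V):
        for r in range(V):
            Sb[offs[j]:offs[j+1], offs[r]:offs[r+1]] -= (
                np.outer(s_view[j], s_view[r]) / n_total)

    # solve Sb w = lambda (Sw + reg*I) w; keep the top `dim` eigenvectors
    vals, vecs = eigh(Sb, Sw + reg * np.eye(D))
    W = vecs[:, np.argsort(vals)[::-1][:dim]]
    return [W[offs[j]:offs[j+1], :] for j in range(V)]
```

After fitting, a gesture descriptor from any single camera can be projected with its view's transform (`Ws[v].T @ x`) and classified in the shared space, which is what makes the common space useful when the test-time viewpoint differs from training.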
Doan, H. G., Tran, T. H., Vu, H., Le, T. L., Nguyen, V. T., Dinh, S. V., … Nguyen, D. C. (2020). Multi-view Discriminant Analysis for Dynamic Hand Gesture Recognition. In Communications in Computer and Information Science (Vol. 1180 CCIS, pp. 196–210). Springer. https://doi.org/10.1007/978-981-15-3651-9_18