Forward Hand Gesture Spotting and Prediction Using HMM-DNN Model

Abstract

Automatic key gesture detection and recognition are difficult tasks in Human–Computer Interaction because the start and end points of each gesture of interest must be located in a continuous input stream. By integrating Hidden Markov Models (HMMs) and Deep Neural Networks (DNNs), the present research provides an autonomous technique that carries out hand gesture spotting and prediction simultaneously, with no time delay. HMMs are used over the extracted features to spot meaningful gestures via a forward spotting mechanism with varying sliding-window sizes, and DNNs then perform the recognition step. To spot the meaningful number gestures (0–9) accurately, a stochastic strategy is proposed for constructing a non-gesture model from HMMs without dedicated training data. The non-gesture model provides a confidence measure that is used as an adaptive threshold to determine where meaningful gestures begin and end in the input video stream. DNNs, in turn, are extremely efficient and perform exceptionally well at real-time object detection. According to the experimental results, the proposed method can spot and predict meaningful gestures with a reliability of 94.70%.
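The spotting scheme summarized above can be sketched in code: each gesture has an HMM, a non-gesture model supplies the adaptive threshold, and a gesture is declared active on a sliding window whenever some gesture model's forward log-likelihood exceeds the threshold model's. All model parameters below (toy single-state HMMs, a three-symbol alphabet, the `"zero"` gesture) are illustrative assumptions for the sketch, not the paper's actual models or features.

```python
# Minimal sketch of forward gesture spotting with an adaptive HMM threshold,
# assuming discrete observation symbols and toy single-state models; a real
# system would use multi-state HMMs over motion features extracted from video.
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a symbol sequence under an HMM (scaled forward algorithm)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    logp = np.log(c)
    alpha /= c
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]
        c = alpha.sum()
        logp += np.log(c)
        alpha /= c
    return logp

def spot_gestures(stream, gesture_models, non_gesture, window=4):
    """Forward spotting: a gesture starts when the best gesture model beats the
    non-gesture (adaptive-threshold) model on the current sliding window, and
    ends when the non-gesture model wins again."""
    events, in_gesture = [], False
    for t in range(window, len(stream) + 1):
        win = stream[t - window:t]
        label, best = max(
            ((name, forward_loglik(win, *m)) for name, m in gesture_models.items()),
            key=lambda x: x[1])
        threshold = forward_loglik(win, *non_gesture)
        if best > threshold and not in_gesture:
            in_gesture, start, current = True, t - window, label
        elif best <= threshold and in_gesture:
            in_gesture = False
            events.append((current, start, t))
    if in_gesture:
        events.append((current, start, len(stream)))
    return events

# Toy setup: symbol 0 characterises the "zero" gesture, symbol 2 is noise;
# the non-gesture model emits all symbols uniformly.
pi = np.array([1.0])
A = np.array([[1.0]])
gestures = {"zero": (pi, A, np.array([[0.9, 0.05, 0.05]]))}
non_gesture = (pi, A, np.array([[1/3, 1/3, 1/3]]))

stream = [2] * 5 + [0] * 8 + [2] * 5
print(spot_gestures(stream, gestures, non_gesture))  # → [('zero', 4, 15)]
```

In this sketch the detected span is approximate (it snaps to window boundaries), which mirrors why the paper varies the sliding-window size; the subsequent DNN classifier would then be applied to the spotted segment.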

Citation (APA)

Elmezain, M., Alwateer, M. M., El-Agamy, R., Atlam, E., & Ibrahim, H. M. (2023). Forward Hand Gesture Spotting and Prediction Using HMM-DNN Model. Informatics, 10(1). https://doi.org/10.3390/informatics10010001
