Integrated deep learning structures for hand gesture recognition

Abstract

This paper proposes controlling objects with hand movements, even when the user is far from the system. The method is based on detecting hands and predicting the state and direction of hand movement. This human-computer interface (HCI) serves as an assistive system for users near or far from the objects they control. The model is specifically designed for controlling a computer mouse on a large screen during a formal presentation: moving the hand left, right, up, or down moves the mouse pointer, and mouse button commands are issued from the hand state. A closed hand presses the mouse button, which stays pressed until the same hand is opened. The system uses the Single Shot MultiBox Detector (SSD) architecture for hand detection and a Convolutional Neural Network (CNN) for predicting hand states. This integrated system lets users control the mouse from a distance without any additional hardware. Test results show that the system is robust and accurate. It requires only a single camera and helps users who are away from the computer during a presentation to move through their slides.
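The control loop described in the abstract can be sketched as follows. This is a minimal, hedged illustration, not the paper's implementation: it assumes an upstream SSD detector supplies normalized hand bounding-box centers and a CNN supplies an open/closed state, and all names here (`Detection`, `MouseController`, `gain`) are illustrative inventions.

```python
# Sketch of the gesture-to-mouse mapping: hand motion moves the pointer,
# a closed hand presses the button, and it stays pressed until the hand opens.
# Assumes per-frame detections from an SSD hand detector + CNN state classifier.
from dataclasses import dataclass

@dataclass
class Detection:
    cx: float        # hand box center x, normalized to [0, 1]
    cy: float        # hand box center y, normalized to [0, 1]
    closed: bool     # CNN-predicted hand state (True = closed fist)

class MouseController:
    """Translate successive hand detections into pointer deltas and clicks."""

    def __init__(self, screen_w=1920, screen_h=1080, gain=1.5):
        self.screen_w, self.screen_h = screen_w, screen_h
        self.gain = gain        # amplify hand motion to cover a big screen
        self.prev = None        # previous frame's detection
        self.button_down = False
        self.events = []        # recorded events, for inspection

    def update(self, det: Detection):
        if self.prev is not None:
            # Relative hand motion (left/right/up/down) -> pointer delta.
            dx = (det.cx - self.prev.cx) * self.screen_w * self.gain
            dy = (det.cy - self.prev.cy) * self.screen_h * self.gain
            if dx or dy:
                self.events.append(("move", round(dx), round(dy)))
        # Closed hand triggers the button; it is held until the hand opens.
        if det.closed and not self.button_down:
            self.button_down = True
            self.events.append(("press",))
        elif not det.closed and self.button_down:
            self.button_down = False
            self.events.append(("release",))
        self.prev = det

mouse = MouseController()
mouse.update(Detection(0.50, 0.50, closed=False))  # first frame, no motion yet
mouse.update(Detection(0.60, 0.50, closed=False))  # hand moves right -> pointer moves
mouse.update(Detection(0.60, 0.50, closed=True))   # hand closes -> button press
mouse.update(Detection(0.60, 0.50, closed=False))  # hand opens -> button release
```

In a real system the `events` list would instead drive an OS-level mouse API, and the detections would come from running the SSD and CNN on each camera frame.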

Citation (APA)
Korkmaz, S. (2019). Integrated deep learning structures for hand gesture recognition. In Advances in Intelligent Systems and Computing (Vol. 896, pp. 129–136). Springer Verlag. https://doi.org/10.1007/978-3-030-04164-9_19
