Hand gesture recognition (HGR) is an essential technology with applications spanning human-computer interaction, robotics, augmented reality, and virtual reality. It enables more natural and effortless interaction with computers, enhancing the user experience. As HGR adoption increases, it plays a crucial role in bridging the gap between humans and technology, facilitating seamless communication and interaction. In this study, a novel deep learning approach is proposed for developing a Hand Gesture Interface (HGI) that enables touchless control of graphical user interfaces on personal computers. The methodology encompasses the analysis, design, implementation, and deployment of the HGI. Experimental results indicate that the proposed approach improves accuracy and reduces response time compared with existing methods. The system can control various multimedia applications, including VLC media player, Microsoft Word, and PowerPoint. In conclusion, this approach offers a promising solution for developing HGIs that support efficient, intuitive interaction with computers, making communication more natural and accessible for users.
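The abstract describes a system that maps recognized hand gestures to controls for applications such as VLC, Word, and PowerPoint. The following is a minimal, hypothetical sketch of such a gesture-to-command dispatch step; the gesture names, command names, and confidence threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical gesture-to-command dispatch for an HGI.
# All gesture and command names below are illustrative assumptions.
GESTURE_COMMANDS = {
    "swipe_left": "previous_slide",   # e.g., PowerPoint navigation
    "swipe_right": "next_slide",
    "open_palm": "play_pause",        # e.g., VLC playback toggle
    "fist": "stop",
}

def dispatch(gesture, confidence, threshold=0.8):
    """Map a recognized gesture label to an application command.

    Predictions below the confidence threshold are rejected
    (returning None) to avoid spurious touchless inputs.
    """
    if confidence < threshold:
        return None
    return GESTURE_COMMANDS.get(gesture)
```

In this sketch, a recognizer's output such as `dispatch("open_palm", 0.95)` yields `"play_pause"`, while low-confidence or unknown gestures are ignored; a real HGI would forward the resulting command to the target application.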
Citation
Elmagrouni, I., Ettaoufik, A., Aouad, S., & Maizate, A. (2023). A Deep Learning Framework for Hand Gesture Recognition and Multimodal Interface Control. Revue d’Intelligence Artificielle, 37(4), 881–887. https://doi.org/10.18280/ria.370407