Mouth gesture interface for VLC media player


Abstract

This work presents a framework for mouth gesture recognition for a Human Computer Interface (HCI). It replaces traditional input devices such as the mouse and keyboard, allowing a user to operate a computer using mouth gestures, and is aimed at helping severely disabled and paralyzed people. The pipeline comprises mouth detection, region extraction, gesture classification, and interfacing with computer applications. First, the face and mouth regions are detected using a Haar cascade classifier. Second, gestures are recognized using a deep learning approach based on a Convolutional Neural Network (CNN), which classifies them as mouth close, mouth open, tongue left, or tongue right. Finally, an HCI is created by mapping the mouth gestures to VLC media player operations: play, pause, forward jump, and backward jump. The performance of the proposed method is measured and compared with other existing methods, and it is found to perform better.
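The final stage described in the abstract maps classifier outputs to player operations. A minimal sketch of that mapping is shown below; the label strings, key bindings, and the debouncing helper are assumptions for illustration (the paper does not publish its code), but requiring the same gesture across several consecutive frames before firing an action is a common way to suppress per-frame classifier jitter in such interfaces.

```python
# Hypothetical gesture-to-VLC-action mapping (labels as named in the abstract;
# the action names and debounce logic are illustrative, not the authors' code).
GESTURE_TO_ACTION = {
    "mouth_open": "play",
    "mouth_close": "pause",
    "tongue_left": "backward_jump",
    "tongue_right": "forward_jump",
}

class GestureDebouncer:
    """Fire an action only after the same gesture has been predicted in
    `threshold` consecutive frames, so a single noisy CNN prediction
    cannot trigger a player command."""

    def __init__(self, threshold=5):
        self.threshold = threshold
        self.last = None   # most recently seen gesture label
        self.count = 0     # how many consecutive frames it has been seen

    def update(self, gesture):
        if gesture == self.last:
            self.count += 1
        else:
            self.last, self.count = gesture, 1
        # Fire exactly once, on the frame where the streak reaches threshold.
        if self.count == self.threshold:
            return GESTURE_TO_ACTION.get(gesture)
        return None
```

In a full system, each frame would pass through face/mouth detection (e.g. OpenCV's `cv2.CascadeClassifier`) and the CNN before its label reaches `update()`, and the returned action would be forwarded to VLC, for instance by emitting the player's hotkeys.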

Citation (APA)

Rajaprabu, A., & Geetha, A. (2019). Mouth gesture interface for VLC media player. International Journal of Innovative Technology and Exploring Engineering, 8(6 Special Issue 4), 5–8. https://doi.org/10.35940/ijitee.F1002.0486S419
