Presentation interface based on gesture and voice recognition

Abstract

In this paper, we introduce a Kinect-based interface that recognizes gestures and voice, developed to control presentations such as speeches or lectures. The Kinect camera provides the coordinates of the presenter's body, from which hand positions and gestures are recognized. These data are used to create a hook between the user's hand and a presentation application such as Microsoft PowerPoint. Our interface recognizes grip and push gestures from the presenter, and each recognized gesture generates a signal to the presentation application, such as a shortcut to change slides or to invoke additional tools. It is also possible to start and end the presentation by voice using our voice recognition tool. Additionally, we provide tools that go beyond changing slides and give the presenter more options, such as a memo tool for highlighting parts of a slide directly, and an eraser. This paper describes the methodology and presents the results of our test sessions. The interface effectively improves the presenter's capabilities, and we believe such an interface can be commercialized for presentations and other types of use.
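The abstract describes a pipeline in which recognized gestures and voice commands are translated into application shortcuts. The following is a minimal sketch of that idea, not the authors' implementation: the gesture labels ("push", "grip") and voice phrases follow the abstract, while the recognizer stubs, the key bindings, and the use of the pyautogui library to inject keystrokes are illustrative assumptions.

# Minimal sketch (assumed design, not the paper's code): map recognized
# gestures and voice commands to presentation shortcuts.
import pyautogui  # pip install pyautogui; sends OS-level keystrokes

# Hypothetical gesture-to-shortcut mapping for a slide-show application.
GESTURE_TO_KEY = {
    "push": "right",  # advance to the next slide
    "grip": "left",   # go back to the previous slide
}

# Hypothetical voice commands for starting and ending the presentation.
VOICE_TO_KEY = {
    "start presentation": "f5",  # begin the slide show
    "end presentation": "esc",   # leave the slide show
}

def on_gesture(gesture: str) -> None:
    """Forward a recognized gesture to the presentation application."""
    key = GESTURE_TO_KEY.get(gesture)
    if key is not None:
        pyautogui.press(key)

def on_voice_command(phrase: str) -> None:
    """Forward a recognized voice command to the presentation application."""
    key = VOICE_TO_KEY.get(phrase.lower())
    if key is not None:
        pyautogui.press(key)

if __name__ == "__main__":
    # In the real system these events would come from Kinect skeleton
    # tracking and speech recognition; here we simply simulate one of each.
    on_voice_command("start presentation")
    on_gesture("push")

In this sketch the recognition front end is decoupled from the application hook: any recognizer that emits a gesture or phrase label can drive the same shortcut table.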

Citation (APA)

Kim, J., Kim, S., Hong, K., Jean, D., & Jung, K. (2014). Presentation interface based on gesture and voice recognition. In Lecture Notes in Electrical Engineering (Vol. 308, pp. 75–81). Springer Verlag. https://doi.org/10.1007/978-3-642-54900-7_11
