Multimodal media center interface based on speech, gestures and haptic feedback

3 Citations · 19 Readers (Mendeley)

This article is free to access.

Abstract

We present a multimodal media center interface based on speech input, gestures, and haptic feedback (hapticons). In addition, the application includes a zoomable context + focus GUI in tight combination with speech output. The resulting interface is designed for and evaluated with different user groups, including visually and physically impaired users. Finally, we present the key results from its user evaluation and public pilot studies. © 2009 Springer Berlin Heidelberg.

Citation (APA)

Turunen, M., Hakulinen, J., Hella, J., Rajaniemi, J. P., Melto, A., Mäkinen, E., … Raisamo, R. (2009). Multimodal media center interface based on speech, gestures and haptic feedback. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5727 LNCS, pp. 54–57). https://doi.org/10.1007/978-3-642-03658-3_9
