We present a multimodal media center interface based on speech input, gestures, and haptic feedback (hapticons). In addition, the application includes a zoomable focus-plus-context GUI tightly integrated with speech output. The resulting interface is designed for and evaluated with diverse user groups, including visually and physically impaired users. Finally, we present the key results from its user evaluation and public pilot studies. © 2009 Springer Berlin Heidelberg.
CITATION STYLE
Turunen, M., Hakulinen, J., Hella, J., Rajaniemi, J. P., Melto, A., Mäkinen, E., … Raisamo, R. (2009). Multimodal media center interface based on speech, gestures and haptic feedback. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5727 LNCS, pp. 54–57). https://doi.org/10.1007/978-3-642-03658-3_9