Accessible speech-based and multimodal media center interface for users with physical disabilities

Abstract

We present a multimodal media center user interface with a hands-free speech recognition input method for users with physical disabilities. In addition to speech input, the application features a zoomable context + focus graphical user interface and several other modalities, including speech output, haptic feedback, and gesture input. These features were developed in cooperation with representatives of the target user groups. In this article, we focus on the speech input interface and its evaluations. We discuss the user interface design and the results of a long-term pilot study conducted in the homes of physically disabled users, and compare the results with those of a public pilot study and laboratory studies carried out with non-disabled users. © 2010 Springer-Verlag.

Citation (APA)

Turunen, M., Hakulinen, J., Melto, A., Hella, J., Laivo, T., Rajaniemi, J. P., … Raisamo, R. (2010). Accessible speech-based and multimodal media center interface for users with physical disabilities. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5967 LNCS, pp. 66–79). Springer-Verlag. https://doi.org/10.1007/978-3-642-12397-9_5
