Context-aware querying for multimodal search engines

Abstract

Multimodal interaction provides the user with multiple modes of interacting with a system, such as gestures, speech, text, video, and audio. A multimodal system offers several distinct means of data input and output. In this paper, we present our work in the context of the I-SEARCH project, which aims to enable context-aware querying of a multimodal search framework that incorporates real-world data such as user location or temperature. We introduce the concepts of MuSeBag for multimodal query interfaces, UIIFace for multimodal interaction handling, and CoFind for collaborative search as the core components behind the I-SEARCH multimodal user interface, which we evaluate via a user study. © 2012 Springer-Verlag.
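
For illustration only, the sketch below shows one way a context-aware multimodal query could bundle several input modalities with real-world sensor data such as location and temperature. All names and fields are hypothetical and do not reflect the actual I-SEARCH interfaces (MuSeBag, UIIFace, CoFind).

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Context:
        # Real-world data attached to the query (illustrative fields only)
        latitude: Optional[float] = None       # user location
        longitude: Optional[float] = None
        temperature_c: Optional[float] = None  # ambient temperature

    @dataclass
    class MultimodalQuery:
        # Several distinct input modalities combined in a single query
        text: Optional[str] = None
        image_url: Optional[str] = None
        audio_url: Optional[str] = None
        context: Context = field(default_factory=Context)

    # Example: a text-plus-image query enriched with the user's context
    query = MultimodalQuery(
        text="baroque church facade",
        image_url="https://example.org/photo.jpg",
        context=Context(latitude=50.11, longitude=8.68, temperature_c=21.5),
    )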

Citation (APA)

Etzold, J., Brousseau, A., Grimm, P., & Steiner, T. (2012). Context-aware querying for multimodal search engines. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7131 LNCS, pp. 728–739). https://doi.org/10.1007/978-3-642-27355-1_77
