Building multimodal interfaces out of executable, model-based interactors and mappings


Abstract

Future interaction will be embedded into smart environments that let users choose and combine a heterogeneous set of interaction devices and modalities based on their preferences, realizing ubiquitous and multimodal access. We propose a model-based runtime environment (the MINT Framework) that describes multimodal interaction through interactors and multimodal mappings. The interactors are modeled as state machines and describe user interface elements for various modalities. Mappings connect these interactors with interaction devices and support the definition of multimodal relations. We demonstrate our implementation by modeling a multimodal navigation that supports pointing and hand gestures. We additionally show the flexibility of our approach, which supports modeling common interaction paradigms such as drag-and-drop as well. © 2011 Springer-Verlag.
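The abstract's core idea, interactors as state machines connected to devices through multimodal mappings, can be illustrated with a minimal sketch. All class names, states, and the event vocabulary below are hypothetical, chosen for illustration; they are not the MINT Framework's actual API:

```python
# Hypothetical sketch (not the MINT Framework's real API): an interactor
# modeled as a small state machine, plus mappings that route observations
# from two modalities (pointer and hand gesture) to the same interactor.

class Interactor:
    """A UI element whose behavior is defined by a state machine."""
    def __init__(self, name, transitions, initial="idle"):
        self.name = name
        self.state = initial
        self.transitions = transitions  # (state, event) -> next state

    def fire(self, event):
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
            print(f"{self.name}: {event} -> {self.state}")

# A navigation 'button' interactor: it can be focused and then selected.
button = Interactor("NextPage", {
    ("idle", "focus"): "focused",
    ("focused", "unfocus"): "idle",
    ("focused", "select"): "selected",
})

# Mappings bind device/modality observations to interactor events, so
# pointing and a hand gesture can both drive the same UI element.
mappings = {
    ("pointer", "hover"): (button, "focus"),
    ("pointer", "click"): (button, "select"),
    ("gesture", "point_at"): (button, "focus"),
    ("gesture", "grab"): (button, "select"),
}

def dispatch(device, observation):
    target = mappings.get((device, observation))
    if target:
        interactor, event = target
        interactor.fire(event)

# Multimodal navigation: point with the hand, confirm with a grab gesture.
dispatch("gesture", "point_at")  # NextPage: focus -> focused
dispatch("gesture", "grab")      # NextPage: select -> selected
```

Because the mapping table, not the interactor, decides which device events trigger which transitions, new modalities can be attached to existing interface elements without changing the elements themselves, which is the flexibility the paper claims for paradigms such as drag-and-drop.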

Cite (APA)

Feuerstack, S., & Pizzolato, E. (2011). Building multimodal interfaces out of executable, model-based interactors and mappings. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6761 LNCS, pp. 221–228). https://doi.org/10.1007/978-3-642-21602-2_25
