A model of shortcut usage in multimodal human-computer interaction



Abstract

Users of multimodal systems have to choose between different interaction strategies, and the number of interaction steps needed to solve a task can vary across the available modalities. In this work we introduce such a task and present empirical data showing that users' strategy selection is affected by modality-specific shortcuts. The system under investigation offered touch screen and speech as input modalities. We introduce a first version of an ACT-R model that uses the architecture's inherent mechanisms, production compilation and utility learning, to identify modality-specific shortcuts. A simple task analysis is implemented in declarative memory. The model matches the human data reasonably accurately. In future work we will try to improve the fit by extending the model with further factors influencing modality selection, such as speech recognition errors. Furthermore, the model will be refined with regard to the cognitive processes of speech production and touch screen interaction. © 2011 Springer-Verlag.
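The utility-learning mechanism mentioned in the abstract can be sketched roughly as follows. This is an illustrative simplification of ACT-R's utility update rule, U(n) = U(n-1) + α(R(n) - U(n-1)), not the authors' model: the step counts per modality, the reward scheme, the learning rate, and the forced-sampling phase are all assumptions made for the example.

```python
# Illustrative sketch of ACT-R-style utility learning for modality selection.
# Assumptions (not from the paper): reward = 10 - interaction steps, a
# learning rate of 0.2, and a brief forced-sampling phase before greedy choice.

ALPHA = 0.2  # learning rate of the utility update rule

def update_utility(utility, reward, alpha=ALPHA):
    """ACT-R utility learning: U(n) = U(n-1) + alpha * (R(n) - U(n-1))."""
    return utility + alpha * (reward - utility)

def simulate(trials=100, steps=None, sampling=10):
    """Each modality acts as a competing 'production'; the one offering the
    shortcut (fewer interaction steps) accrues higher utility over trials."""
    if steps is None:
        steps = {"touch": 5, "speech": 2}  # hypothetical step counts per modality
    modalities = list(steps)
    utilities = {m: 0.0 for m in modalities}
    for t in range(trials):
        if t < sampling:
            # force a few tries of every modality before exploiting
            choice = modalities[t % len(modalities)]
        else:
            choice = max(utilities, key=utilities.get)
        reward = 10 - steps[choice]  # the shortcut modality pays more
        utilities[choice] = update_utility(utilities[choice], reward)
    return utilities

if __name__ == "__main__":
    print(simulate())  # speech (the 2-step shortcut) ends with higher utility
```

Under these assumptions, the modality offering the shortcut (here, speech) accumulates the higher utility and ends up being selected, mirroring the shortcut effect the paper reports for strategy selection.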

Citation (APA)

Schaffer, S., Schleicher, R., & Möller, S. (2011). A model of shortcut usage in multimodal human-computer interaction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6777 LNCS, pp. 337–346). https://doi.org/10.1007/978-3-642-21799-9_38
