Gaze-X: Adaptive, affective, multimodal interface for single-user office scenarios

Abstract

This paper describes an intelligent system, named Gaze-X, that we developed to support affective multimodal human-computer interaction (AMM-HCI): the user's actions and emotions are modeled and then used to adapt the interaction and support the user in his or her activity. Gaze-X is based on sensing and interpreting the human part of the computer's context, known as W5+ (who, where, what, when, why, how). It integrates natural human communicative modalities, including speech, eye-gaze direction, face and facial expression, with standard HCI modalities such as keystrokes, mouse movements, and active-software identification. These inputs feed decision-making processes that adapt the interaction to support the user's activity according to his or her preferences. A usability study conducted in an office scenario with a number of users indicates that Gaze-X is perceived as effective, easy to use, useful, and of high affective quality. © 2007 Springer-Verlag Berlin Heidelberg.
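To make the described pipeline concrete, the following minimal Python sketch shows one way a loop of this kind could be structured: per-modality interpretations are fused into a W5+-style context record, and a decision rule maps the fused context to an interface adaptation. All class names, event labels, and the decision rule here are illustrative assumptions, not the authors' implementation.

    from dataclasses import dataclass, field

    # Hypothetical interpreted observation from one input modality
    # (speech, gaze, facial expression, keystrokes, mouse, active software).
    @dataclass
    class Observation:
        modality: str      # e.g. "gaze", "face", "keystrokes"
        label: str         # interpreted event, e.g. "on_dialog"
        confidence: float  # interpreter's confidence in [0, 1]

    # Simplified W5+ context record (who, where, what, when, why, how).
    @dataclass
    class W5PlusContext:
        who: str = "unknown"
        where: str = "unknown"
        what: str = "unknown"
        when: float = 0.0
        why: str = "unknown"
        how: dict = field(default_factory=dict)  # modality -> interpreted label

    def fuse(observations: list[Observation]) -> W5PlusContext:
        """Toy fusion: keep, per modality, the most confident interpretation."""
        ctx = W5PlusContext()
        best: dict[str, Observation] = {}
        for obs in observations:
            if obs.modality not in best or obs.confidence > best[obs.modality].confidence:
                best[obs.modality] = obs
        ctx.how = {m: o.label for m, o in best.items()}
        return ctx

    def adapt(ctx: W5PlusContext) -> str:
        """Toy decision rule mapping the fused context to an adaptation."""
        if ctx.how.get("face") == "frustrated" and ctx.how.get("gaze") == "on_dialog":
            return "offer_help_on_current_dialog"
        return "no_change"

    if __name__ == "__main__":
        obs = [
            Observation("face", "frustrated", 0.8),
            Observation("gaze", "on_dialog", 0.9),
            Observation("keystrokes", "rapid_deletes", 0.7),
        ]
        print(adapt(fuse(obs)))  # -> offer_help_on_current_dialog

In the actual system, each modality would be produced by its own interpreter (e.g. a facial-expression recognizer), and the adaptation step would also consult stored user preferences; the sketch collapses both into simple placeholders.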

Citation (APA)

Maat, L., & Pantic, M. (2007). Gaze-X: Adaptive, affective, multimodal interface for single-user office scenarios. In Lecture Notes in Computer Science (Vol. 4451 LNAI, pp. 251–271). Springer. https://doi.org/10.1007/978-3-540-72348-6_13
