Using virtual agents to guide attention in multi-task scenarios


Abstract

Humans can efficiently decode human and human-like cues. We explore whether a virtual agent's facial expressions and gaze can be used to guide attention and elicit amplified processing of task-related cues. We argue that an emphasis on information processing will support the future development of assistance systems, for example by reducing task load and creating a sense of reliability for such systems. A pilot study indicates subjects' propensity to respond to the agent's cues, most importantly gaze, but not yet to rely on them completely, possibly leading to decreased performance. © 2013 Springer-Verlag.

Citation (APA)

Kulms, P., & Kopp, S. (2013). Using virtual agents to guide attention in multi-task scenarios. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8108 LNAI, pp. 295–302). https://doi.org/10.1007/978-3-642-40415-3_26
