Comprehension and engagement in survey interviews with virtual agents

Citations: 23
Readers: 107 (Mendeley users who have this article in their library)

Abstract

This study investigates how an onscreen virtual agent's dialog capability and facial animation affect survey respondents' comprehension and engagement in "face-to-face" interviews, using questions from US government surveys whose results have far-reaching impact on national policies. In the study, 73 laboratory participants were randomly assigned to respond in one of four interviewing conditions, in which the virtual agent had either high or low dialog capability (implemented through Wizard of Oz) and either high or low facial animation, based on motion capture from a human interviewer. Respondents, whose faces were visible to the Wizard (and video-recorded) during the interviews, answered 12 questions about housing, employment, and purchases on the basis of fictional scenarios designed to allow measurement of comprehension accuracy, defined as the fit between responses and US government definitions. Respondents answered more accurately with the high-dialog-capability agents, requesting clarification more often, particularly for ambiguous scenarios; and they generally treated the high-dialog-capability interviewers more socially, looking at the interviewer more and judging high-dialog-capability agents as more personal and less distant. Greater interviewer facial animation did not affect response accuracy, but it led to more displays of engagement (verbal and visual acknowledgments, and smiles) and to the virtual interviewer's being rated as less natural. The pattern of results suggests that a virtual agent's dialog capability and facial animation differently affect survey respondents' experience of interviews, behavioral displays, and comprehension, and thus the accuracy of their responses. These results also suggest design considerations for building survey interviewing agents, which may differ depending on the kinds of survey questions (sensitive or not) that are asked.


Citation (APA)

Conrad, F. G., Schober, M. F., Jans, M., Orlowski, R. A., Nielsen, D., & Levenstein, R. (2015). Comprehension and engagement in survey interviews with virtual agents. Frontiers in Psychology, 6, Article 1578. https://doi.org/10.3389/fpsyg.2015.01578

Readers' Seniority

PhD / Post grad / Masters / Doc: 46 (66%)
Researcher: 11 (16%)
Professor / Associate Prof.: 10 (14%)
Lecturer / Post doc: 3 (4%)

Readers' Discipline

Psychology: 17 (30%)
Computer Science: 16 (28%)
Social Sciences: 13 (23%)
Business, Management and Accounting: 11 (19%)

Article Metrics

Social media shares, likes, and comments: 18
