Development of a Trust-Aware User Simulator for Statistical Proactive Dialog Modeling in Human-AI Teams

Abstract

The concept of a Human-AI team has gained increasing attention in recent years. For effective collaboration between humans and AI teammates, proactivity is crucial for close coordination and effective communication. However, how to design adequate proactivity for AI-based systems that support humans remains an open and challenging question. In this paper, we present the development of a corpus-based user simulator for training and testing proactive dialog policies. The simulator incorporates prior knowledge about proactive dialog and its effect on user trust, and simulates user behavior as well as personal information, including socio-demographic features and personality traits. Two simulation approaches were compared, with a task-step-based approach yielding better overall results due to its enhanced modeling of sequential dependencies. This research presents a promising avenue for exploring and evaluating appropriate proactive strategies in a dialog game setting to improve Human-AI teams.
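
To make the idea concrete, the sketch below outlines what a trust-aware, task-step-based user simulator could look like in code. It is purely illustrative and not the authors' implementation: the proactivity levels, the trust-update rule, the personality representation, and the user-action sampling are all simplifying assumptions, and every name is hypothetical.

```python
import random
from dataclasses import dataclass, field

# Hypothetical proactivity levels a dialog policy might choose from.
PROACTIVITY_LEVELS = ["none", "notification", "suggestion", "intervention"]

@dataclass
class UserProfile:
    """Simulated personal information, as described in the abstract."""
    age: int
    gender: str
    # Big Five personality traits in [0, 1] (assumed representation).
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

@dataclass
class TrustAwareUserSimulator:
    """Minimal task-step-based simulator: trust evolves across task steps,
    so each step depends on the sequence of interactions that preceded it."""
    profile: UserProfile
    trust: float = 0.5          # latent trust in the AI teammate, in [0, 1]
    task_step: int = 0
    history: list = field(default_factory=list)

    def step(self, system_action: str) -> str:
        """Consume the system's (possibly proactive) action and return
        a simulated user reaction, updating latent trust along the way."""
        assert system_action in PROACTIVITY_LEVELS
        # Assumed trust dynamics: moderate proactivity builds trust, while
        # heavy-handed intervention can erode it for more anxious users.
        delta = {
            "none": 0.0,
            "notification": 0.02,
            "suggestion": 0.04,
            "intervention": 0.05 - 0.1 * self.profile.neuroticism,
        }[system_action]
        self.trust = min(1.0, max(0.0, self.trust + delta))

        # Sample a user reaction conditioned on current trust (toy model).
        reaction = "accept" if random.random() < self.trust else "reject"
        self.history.append((self.task_step, system_action, reaction))
        self.task_step += 1
        return reaction

if __name__ == "__main__":
    user = UserProfile(age=30, gender="f", openness=0.7,
                       conscientiousness=0.6, extraversion=0.5,
                       agreeableness=0.8, neuroticism=0.3)
    sim = TrustAwareUserSimulator(profile=user)
    for action in ["notification", "suggestion", "intervention"]:
        print(action, "->", sim.step(action), f"(trust={sim.trust:.2f})")
```

A simulator of this shape can stand in for a human when training a proactive dialog policy with reinforcement learning: the policy picks a proactivity level, and the simulated reaction and trust trajectory supply the feedback signal.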

Citation

Kraus, M., Riekenbrauck, R., & Minker, W. (2023). Development of a Trust-Aware User Simulator for Statistical Proactive Dialog Modeling in Human-AI Teams. In UMAP 2023 - Adjunct Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization (pp. 38–43). Association for Computing Machinery, Inc. https://doi.org/10.1145/3563359.3597403
