Examining Trust in Conversational Systems: Conceptual and Empirical Findings on User Trust, Related Behavior, and System Trustworthiness


Abstract

Machine learning (ML)-based conversational systems are a value enabler for human-machine interaction. At the same time, the opacity, complexity, and humanness that accompany such systems introduce issues of their own, including trust misalignment. While trust is viewed as a prerequisite for effective system use, few studies have considered calibrating for appropriate trust or empirically tested the relationship between trust and related behavior. Moreover, the intended effects of transparency-enhancing design cues remain ambiguous. My research explores the impact of system performance on trust, the divergence between trust and behavior, and how transparency might attenuate the effects of low system performance, in the specific context of decision-making tasks assisted by ML-based conversational systems.

Citation (APA)

Schmitt, A. (2022). Examining Trust in Conversational Systems: Conceptual and Empirical Findings on User Trust, Related Behavior, and System Trustworthiness. In AIES 2022 - Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (p. 912). Association for Computing Machinery, Inc. https://doi.org/10.1145/3514094.3539525
