Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities

Abstract

With the rise of artificial intelligence (AI), trust in AI has emerged as a paramount societal concern. Despite increased attention from researchers, the topic remains fragmented, lacking a common conceptual and theoretical foundation. To facilitate systematic research on this topic, we develop a Foundational Trust Framework that provides a conceptual, theoretical, and methodological foundation for trust research in general. The framework positions trust in general, and trust in AI specifically, as a problem of interaction among systems, and applies systems thinking and general systems theory to trust and trust in AI. The Foundational Trust Framework is then used to gain a deeper understanding of the nature of trust in AI. From this analysis, a research agenda emerges that proposes significant questions to facilitate further advances in empirical, theoretical, and design research on trust in AI.

Citation (APA)

Lukyanenko, R., Maass, W., & Storey, V. C. (2022). Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities. Electronic Markets. https://doi.org/10.1007/s12525-022-00605-4
