A review of trust in artificial intelligence: Challenges, vulnerabilities and future directions


Abstract

Artificial Intelligence (AI) can benefit society, but it is also fraught with risks. Societal adoption of AI is recognized to depend on stakeholder trust in AI, yet the literature on trust in AI is fragmented, and little is known about the vulnerabilities faced by different stakeholders, making it difficult to draw on this evidence base to inform practice and policy. We undertake a literature review to take stock of what is known about the antecedents of trust in AI, and organize our findings around five trust challenges unique to or exacerbated by AI. Further, we develop a concept matrix identifying the key vulnerabilities to stakeholders raised by each of the challenges, and propose a multi-stakeholder approach to future research.

Citation (APA)

Lockey, S., Gillespie, N., Holm, D., & Someh, I. A. (2021). A review of trust in artificial intelligence: Challenges, vulnerabilities and future directions. In Proceedings of the Annual Hawaii International Conference on System Sciences (Vol. 2020-January, pp. 5463–5472). IEEE Computer Society. https://doi.org/10.24251/hicss.2021.664
