‘If You’re Going to Trust the Machine, Then That Trust Has Got to Be Based on Something’:

  • Winter P
  • Carusi A
Citations: N/A · Mendeley readers: 11

Abstract

The role of Artificial Intelligence (AI) in clinical decision-making raises issues of trust. One issue concerns the conditions for trusting the AI, which tend to be based on validation. However, little attention has been given to how validation is formed, how comparisons come to be accepted, and how AI algorithms are trusted in decision-making. Drawing on interviews with collaborative researchers developing three AI technologies for the early diagnosis of pulmonary hypertension (PH), we show how validation of the AI is jointly produced, so that trust in the algorithm is built up through the negotiation of criteria and terms of comparison during interactions. These processes build up interpretability and interrogation, and co-constitute trust in the technology. As they do so, it becomes difficult to sustain a strict distinction between artificial and human/social intelligence.

Citation (APA)

Winter, P., & Carusi, A. (2022). ‘If You’re Going to Trust the Machine, Then That Trust Has Got to Be Based on Something’: Science & Technology Studies. https://doi.org/10.23987/sts.102198
