Artificial intelligence in medicine: trust it or (merely) rely on it?


Abstract

Problem: Trust and trustworthiness are usually highly valued in modern medicine. These phenomena are seen as components of a good patient–doctor relationship (P-D-R). In the field of artificial intelligence (AI), they are important regulatory reference points. However, it is not entirely clear what we mean when we talk about trust or trustworthiness in these areas. Argumentation: It therefore seems worthwhile to explore the meaning of these terms and their implications for debates in medical ethics. It is argued here that trust involves a complex, noncontrolling, interpersonal attitude, implying that predicating trust of nonpersonal entities—such as AI—is a kind of category mistake. In the context of evaluating AI, reliability is a much more appropriate candidate. Conclusion: In turn, the reliability of AI should be proven and controlled in accordance with the idea of technovigilance, especially from a systems perspective, which implies oversight of the humans in the loop, i.e., doctors. However, this technovigilance-related oversight of doctors does not undermine a good P-D-R, in which trust can retain its proper place.

Citation (APA)

Hiekel, S. (2025). Artificial intelligence in medicine: trust it or (merely) rely on it? Ethik in der Medizin, 37(4), 515–532. https://doi.org/10.1007/s00481-025-00872-7
