Artificial intelligence in clinical decision-making: Rethinking personal moral responsibility

Citations: 4
Readers (Mendeley): 36

Abstract

Artificially intelligent systems (AISs) are being created by software development companies (SDCs) to influence clinical decision-making. Historically, clinicians have led healthcare decision-making, and the introduction of AISs makes SDCs novel actors in the clinical decision-making space. Although these AISs are intended to influence a clinician's decision-making, SDCs have been clear that clinicians are in fact the final decision-makers in clinical care and that AISs can only inform their decisions. As such, the default position is that clinicians should hold responsibility for the outcomes of the use of AISs. Yet this default position fails to account for cases in which an AIS has influenced a clinician's judgement and, thereby, their subsequent decision. In this paper, we argue that this is an imbalanced and unjust position, and that careful thought needs to go into how personal moral responsibility for the use of AISs in clinical decision-making should be attributed. The paper examines the distinction between prospective and retrospective responsibility and treats foreseeability as key to determining how personal moral responsibility can justly be attributed. This leads us to the view that moral responsibility for the outcomes of using AISs in healthcare ought to be shared by the clinical users and SDCs.

Citation (APA)

Smith, H., Birchley, G., & Ives, J. (2024). Artificial intelligence in clinical decision-making: Rethinking personal moral responsibility. Bioethics, 38(1), 78–86. https://doi.org/10.1111/bioe.13222
