Artificial intelligence in clinical decision-making: Rethinking liability


Abstract

This article theorises, within the context of the law of England and Wales, the potential outcomes of negligence claims brought against clinicians and software development companies (SDCs) by patients injured through the use of an AI system (AIS) under human clinical supervision. Currently, a clinician is likely to shoulder liability via a negligence claim for allowing defects in an AIS's outputs to reach patients. We question whether this is 'fair, just and reasonable' to clinical users: we argue that a duty of care to patients ought to be recognised on the part of SDCs as well as clinicians. As an alternative to negligence claims, we propose 'risk pooling', which utilises insurance. Under this approach, a fairer construct of shared responsibility for AIS use could be created between the clinician and the SDC, allowing a rapid mechanism of compensation for injured patients via insurance.

Citation (APA)

Smith, H., & Fotheringham, K. (2020). Artificial intelligence in clinical decision-making: Rethinking liability. Medical Law International, 20(2), 131–154. https://doi.org/10.1177/0968533220945766
