Artificial intelligence and clinical decision support: Clinicians' perspectives on trust, trustworthiness, and liability


Abstract

Artificial intelligence (AI) could revolutionise health care, potentially improving clinician decision making and patient safety, and reducing the impact of workforce shortages. However, policymakers and regulators have concerns over whether AI and clinical decision support systems (CDSSs) are trusted by stakeholders, and indeed whether they are worthy of trust. Yet, what is meant by trust and trustworthiness is often implicit, and it may not be clear who or what is being trusted. We address these lacunae, focusing largely on the perspective(s) of clinicians on trust and trustworthiness in AI and CDSSs. Empirical studies suggest that clinicians' concerns about their use include the accuracy of advice given and potential legal liability if harm to a patient occurs. Onora O'Neill's conceptualisation of trust and trustworthiness provides the framework for our analysis, generating a productive understanding of clinicians' reported trust issues. Through unpacking these concepts, we gain greater clarity over the meaning ascribed to them by stakeholders; delimit the extent to which stakeholders are talking at cross purposes; and promote the continued utility of trust and trustworthiness as useful concepts in current debates around the use of AI and CDSSs.

Citation (APA)

Jones, C., Thornton, J., & Wyatt, J. C. (2023). Artificial intelligence and clinical decision support: Clinicians' perspectives on trust, trustworthiness, and liability. Medical Law Review, 31, 501–520. https://doi.org/10.1093/medlaw/fwad013
