With the introduction of artificial intelligence (AI) into healthcare, there is also a need for professional guidance to support its use. New (2022) reports from the National Health Service AI Lab and Health Education England focus on healthcare workers’ understanding of, and confidence in, AI clinical decision support systems (AI-CDSSs), and are concerned with developing trust in, and the trustworthiness of, these systems. While they offer guidance to aid the developers and purchasers of such systems, they offer little specific guidance for the clinical users who will be required to use them in patient care. This paper argues that clinical, professional and reputational safety will be put at risk if this deficit of professional guidance for the clinical users of AI-CDSSs is not redressed. We argue that it is not enough to develop training for clinical users without first establishing professional guidance regarding their rights and expectations. We conclude with a call to action for clinical regulators: to unite in drafting guidance for users of AI-CDSSs that helps manage clinical, professional and reputational risks. We further suggest that this exercise offers an opportunity to address fundamental issues in the use of AI-CDSSs, regarding, for example, the fair apportionment of responsibility for outcomes.
Smith, H., Downer, J., & Ives, J. (2024). Clinicians and AI use: where is the professional guidance? Journal of Medical Ethics, 50(7), 437–441. https://doi.org/10.1136/jme-2022-108831