When can we Kick (Some) Humans “Out of the Loop”? An Examination of the use of AI in Medical Imaging for Lumbar Spinal Stenosis

Abstract

Artificial intelligence (AI) has attracted an increasing amount of attention, both positive and negative. Its potential applications in healthcare are manifold and revolutionary, and within the realm of medical imaging and radiology (the focus of this paper), significant gains in accuracy and speed, as well as significant savings in cost, stand to be realised through the adoption of this technology. Because of its novelty, a norm of keeping humans “in the loop” wherever AI mechanisms are deployed has become synonymous with good ethical practice in some circles. It has been argued that keeping humans “in the loop” is important for reasons of safety, accountability, and the maintenance of institutional trust. However, as the application of machine learning to the detection of lumbar spinal stenosis (LSS) in this paper’s case study reveals, there are some scenarios where an insistence on keeping humans in the loop (in other words, resistance to automation) seems unwarranted and could lead us to miss out on very real and important opportunities in healthcare—particularly in low-resource settings. It is important to acknowledge these opportunity costs of resisting automation in such contexts, where better options may be unavailable. Using an AI model based on convolutional neural networks, developed by a team of researchers at NUH/NUS medical school in Singapore for automated detection and classification of lumbar spinal canal, lateral recess, and neural foraminal narrowing on MRI scans of the spine to diagnose LSS, we aim to demonstrate that where certain criteria hold (e.g., the AI is as accurate as or better than human experts, risks are low in the event of an error, the gain in wellbeing is significant, and the task being automated is not essentially or importantly human), it is both morally permissible and even desirable to kick the humans out of the loop.

Citation (APA)
Muyskens, K., Ma, Y., Menikoff, J., Hallinan, J., & Savulescu, J. (2024). When can we Kick (Some) Humans “Out of the Loop”? An Examination of the use of AI in Medical Imaging for Lumbar Spinal Stenosis. Asian Bioethics Review. https://doi.org/10.1007/s41649-024-00290-9
