A framework for evaluating clinical artificial intelligence systems without ground-truth annotations

Abstract

A clinical artificial intelligence (AI) system is often validated on data withheld during its development. This provides an estimate of its performance upon future deployment on data in the wild: data that are currently unseen but expected to be encountered in a clinical setting. However, estimating performance on data in the wild is complicated by distribution shift between data in the wild and the withheld data, and by the absence of ground-truth annotations. Here, we introduce SUDO, a framework for evaluating AI systems on data in the wild. Through experiments on AI systems developed for dermatology images, histopathology patches, and clinical notes, we show that SUDO can identify unreliable predictions, inform the selection of models, and allow for the previously out-of-reach assessment of algorithmic bias for data in the wild without ground-truth annotations. These capabilities can contribute to the deployment of trustworthy and ethical AI systems in medicine.
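The abstract does not spell out SUDO's procedure. Purely as a hypothetical illustration of how one might probe an unlabeled prediction with pseudo-labels (not the authors' published pipeline), the sketch below assigns each candidate label in turn to a single unlabeled "wild" data point, retrains a simple classifier, and compares performance on a labeled hold-out set; the toy data, scikit-learn model, and all variable names are assumptions for illustration only.

```python
# Hypothetical sketch: pseudo-label probe for one unlabeled data point.
# Toy data and LogisticRegression are placeholders, not the authors' setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Labeled development data (stand-in for the withheld validation data).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# One unlabeled data point "in the wild" (no ground-truth annotation).
x_wild = rng.normal(size=(1, X.shape[1]))

scores = {}
for pseudo_label in np.unique(y_train):
    # Temporarily add the wild point under a candidate pseudo-label.
    X_aug = np.vstack([X_train, x_wild])
    y_aug = np.append(y_train, pseudo_label)
    clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
    # Score on the labeled hold-out set; the pseudo-label that degrades
    # performance least is treated as the more plausible label.
    scores[pseudo_label] = clf.score(X_holdout, y_holdout)

best = max(scores, key=scores.get)
discrepancy = max(scores.values()) - min(scores.values())
print(f"hold-out accuracy per pseudo-label: {scores}")
print(f"most plausible pseudo-label: {best}, score discrepancy: {discrepancy:.4f}")
```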

Citation (APA)

Kiyasseh, D., Cohen, A., Jiang, C., & Altieri, N. (2024). A framework for evaluating clinical artificial intelligence systems without ground-truth annotations. Nature Communications, 15(1). https://doi.org/10.1038/s41467-024-46000-9
