Open questions and research gaps for monitoring and updating AI-enabled tools in clinical settings


Abstract

As the implementation of artificial intelligence (AI)-enabled tools is realized across diverse clinical environments, there is a growing understanding of the need for ongoing monitoring and updating of prediction models. Dataset shift—temporal changes in clinical practice, patient populations, and information systems—is now well-documented as a source of deteriorating model accuracy and a challenge to the sustainability of AI-enabled tools in clinical care. While best practices are well-established for training and validating new models, there has been limited work developing best practices for prospective validation and model maintenance. In this paper, we highlight the need for updating clinical prediction models and discuss open questions regarding this critical aspect of the AI modeling lifecycle in three focus areas: model maintenance policies, performance monitoring perspectives, and model updating strategies. With the increasing adoption of AI-enabled tools, the need for such best practices must be addressed and incorporated into new and existing implementations. This commentary aims to encourage conversation and motivate additional research across clinical and data science stakeholders.
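To make the monitoring theme concrete, below is a minimal sketch, not drawn from the paper itself, of one common approach to prospective performance monitoring: computing a model's discrimination (AUROC) over rolling time windows and flagging windows where it drops below a preset tolerance relative to the validation-time baseline, a simple proxy signal for the dataset shift described above. The window size, evaluation cadence, and alert threshold are hypothetical choices for illustration.

```python
# Minimal sketch of rolling performance monitoring for a deployed
# clinical prediction model. Window size, cadence, and alert
# threshold are illustrative assumptions, not recommendations.
import pandas as pd
from sklearn.metrics import roc_auc_score


def rolling_auroc(scores: pd.DataFrame, window: str = "90D") -> pd.Series:
    """Compute AUROC over rolling calendar windows.

    `scores` must have a DatetimeIndex plus columns:
      y_true - observed binary outcome (0/1)
      y_pred - model's predicted probability
    """
    scores = scores.sort_index()
    aucs = {}
    # Evaluate once per 30-day period over the trailing `window`
    for end in scores.resample("30D").last().index:
        win = scores.loc[end - pd.Timedelta(window): end]
        # AUROC is undefined unless both outcome classes are present
        if win["y_true"].nunique() == 2:
            aucs[end] = roc_auc_score(win["y_true"], win["y_pred"])
    return pd.Series(aucs, name="auroc")


def flag_degradation(aucs: pd.Series, baseline: float,
                     tolerance: float = 0.05) -> pd.Series:
    """Return evaluation dates where AUROC fell more than `tolerance`
    below the validation-time baseline, a possible sign of dataset shift."""
    return aucs[aucs < baseline - tolerance]
```

Even in this toy form, the sketch surfaces the kinds of decisions the paper identifies as open questions for monitoring policy: how wide a window to use, how often to evaluate, how to handle outcome ascertainment lag and small per-window samples, and what degradation threshold should trigger model updating.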

Citation (APA)

Davis, S. E., Walsh, C. G., & Matheny, M. E. (2022). Open questions and research gaps for monitoring and updating AI-enabled tools in clinical settings. Frontiers in Digital Health, 4. https://doi.org/10.3389/fdgth.2022.958284
