Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator


Abstract

Objective: Implementation of machine learning (ML) may be limited by patients' right to "meaningful information about the logic involved" when ML influences healthcare decisions. Given the complexity of healthcare decisions, ML outputs will likely need to be understood and trusted by physicians, and then explained to patients. We therefore investigated the association between physician understanding of ML outputs, their ability to explain these to patients, and their willingness to trust the ML outputs, using various ML explainability methods.

Materials and Methods: We designed a survey for physicians with a diagnostic dilemma that could be resolved by an ML risk calculator. Physicians were asked to rate their understanding, explainability, and trust in response to 3 different ML outputs. One ML output had no explanation of its logic (the control), and 2 ML outputs used different model-agnostic explainability methods. The relationships among understanding, explainability, and trust were assessed using Cochran-Mantel-Haenszel tests of association.

Results: The survey was sent to 1315 physicians, and 170 (13%) provided completed surveys. There were significant associations between physician understanding and explainability (P

Citation (APA)

Diprose, W. K., Buist, N., Hua, N., Thurier, Q., Shand, G., & Robinson, R. (2020). Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. Journal of the American Medical Informatics Association, 27(4), 592–600. https://doi.org/10.1093/jamia/ocz229
