Medical Informatics in a Tension between Black-Box AI and Trust

Abstract

For medical informaticians, it is becoming increasingly crucial to assess the benefits and disadvantages of AI-based solutions as promising alternatives to many traditional tools. Besides quantitative criteria such as accuracy and processing time, healthcare providers are often interested in qualitative explanations of the solutions. Explainable AI provides methods and tools that are interpretable enough to afford different stakeholders a qualitative understanding of its solutions. Its main purpose is to provide insights into the black-box mechanisms of machine learning programs. Our goal here is to advance the problem of qualitatively assessing AI from the perspective of medical informaticians by providing insights into the central notions: explainability, interpretability, understanding, trust, and confidence.

Citation (APA):
Sariyar, M., & Holm, J. (2022). Medical Informatics in a Tension between Black-Box AI and Trust. In Studies in Health Technology and Informatics (Vol. 289, pp. 41–44). IOS Press BV. https://doi.org/10.3233/SHTI210854
