Biases Affecting Human Decision Making in AI-Supported Second Opinion Settings

Abstract

In this paper we focus on a still-neglected consequence of the adoption of AI in diagnostic settings: the increase of cases in which a human decision maker is called to settle a divergence between a human doctor and the AI, i.e., second opinion requests. We designed a user study, involving more than 70 medical doctors, to understand whether second opinions are affected by the first ones and whether decision makers tend to trust the human interpretation more than the machine's. We observed significant effects on decision accuracy and a sort of "prejudice against the machine", which varies with the respondent's profile. Some implications for sounder second opinion settings are given in light of the results of this study.

Citation (APA)
Cabitza, F. (2019). Biases Affecting Human Decision Making in AI-Supported Second Opinion Settings. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11676 LNAI, pp. 283–294). Springer Verlag. https://doi.org/10.1007/978-3-030-26773-5_25
