When AIs say yes and I say no: On the tension between AI's decision and human's decision from the epistemological perspectives


Abstract

Let us start with a thought experiment. A patient is waiting in the clinic for a diagnosis that will determine whether he needs brain surgery. After the SaMD (Software as a Medical Device) processes his data, the result classifies the patient into the high-risk group with a 99.9% mortality rate and recommends immediate brain surgery. But this result contradicts your own diagnosis that the patient does not need surgery. Will you, as the physician in this scenario, object to the result that the SaMD has produced? In theory, humans should make all the decisions and take AI's results as reference only, as GDPR Article 22 presumes. Quite the opposite, however: AI's results have a greater influence on humans than we think. In this paper, I explore the tension between AI's decision and the human's decision from an epistemological perspective, that is, I seek to justify the reasons behind humans' positive beliefs in AI. My conclusion is that these positive beliefs arise because we misidentify AI as a general technology; only if we recognize their differences correctly can the requirement of "human in the loop" in GDPR Article 22 have its meaning and function.

Citation (APA)

Ku, C. Y. (2019). When AIs say yes and I say no: On the tension between AI's decision and human's decision from the epistemological perspectives. Információs Társadalom. Infonia. https://doi.org/10.22503/INFTARS.XIX.2019.4.5
