With the continuing development of information technology, interaction between artificial intelligence and humans is becoming ever more frequent. In this context, a phenomenon called “medical AI aversion” has emerged, in which the same behaviors elicit different responses depending on whether they come from medical AI or from humans. Medical AI aversion can be understood in terms of how people attribute mental capacities to different targets. It has been demonstrated that when medical professionals dehumanize patients—attributing fewer mental capacities to them and, to some extent, not perceiving and treating them as fully human—they propose more painful but effective treatment options. From the patient’s perspective, will a painful treatment option be unacceptable when the patient perceives the doctor as a human while the doctor disregards the patient’s own mental capacities? Conversely, might a painful treatment plan be accepted because the doctor is an artificial intelligence? The current study investigated these questions and the phenomenon of medical AI aversion in a medical context. Three experiments found that: (1) patients were more accepting of the same treatment plan when it was provided by a human doctor; (2) the treatment provider and the nature of the treatment plan interacted, which in turn affected acceptance of the plan; and (3) perceived experience capacities mediated the relationship between treatment provider (AI vs. human) and treatment plan acceptance. Overall, this study attempted to explain the phenomenon of medical AI aversion through mind perception theory, and the findings are instructive at the applied level for guiding the more rational use of AI and for persuading patients.
CITATION STYLE
Wu, J., Xu, L., Yu, F., & Peng, K. (2022). Acceptance of medical treatment regimens provided by AI vs. human. Applied Sciences (Switzerland), 12(1), 110. https://doi.org/10.3390/app12010110