Intelligent decision support in medical triage: are people robust to biased advice?

Abstract

Background: Intelligent artificial agents ('agents') have emerged in many domains of human society, including healthcare, law, and social services. Because the use of intelligent agents can introduce biases, a commonly proposed safeguard is to keep a human in the loop. Is this enough to ensure unbiased decision making?

Methods: To address this question, an experimental testbed was developed in which a human participant and an agent collaboratively conducted triage on patients during a pandemic crisis. The agent supported the human by providing advice and extra information about the patients. In one condition the agent provided sound advice; in the other it gave biased advice. The research question was whether participants neutralized the bias of the biased agent.

Results: Although the study was exploratory, the data suggest that human participants may not be sufficiently in control to correct the agent's bias.

Conclusions: This research shows how important it is to design and test for human control in concrete human-machine collaboration contexts. It suggests that insufficient human control may leave people unable to detect biases in machines, and thus unable to prevent those biases from affecting decisions.
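To make the two-condition design concrete, here is a minimal, hypothetical sketch of how a sound and a biased advice agent might diverge on the same triage cases. The patient attributes, the decision rule, and the age-based penalty are illustrative assumptions, not the authors' actual testbed or the bias manipulation used in the study.

```python
# Hypothetical sketch only: NOT the authors' testbed. Patient attributes,
# the decision rule, and the age-based penalty are assumptions made for
# illustration.
import random
from dataclasses import dataclass


@dataclass
class Patient:
    age: int
    survival_estimate: float  # assumed model estimate of survival probability


def sound_advice(p: Patient) -> str:
    # Advice driven only by the clinically relevant estimate.
    return "admit" if p.survival_estimate >= 0.5 else "palliative"


def biased_advice(p: Patient) -> str:
    # Same rule, but it systematically penalizes older patients -- an assumed
    # stand-in for whatever bias the study's agent exhibited.
    penalty = 0.2 if p.age >= 65 else 0.0
    return "admit" if p.survival_estimate - penalty >= 0.5 else "palliative"


if __name__ == "__main__":
    random.seed(0)
    patients = [
        Patient(age=random.randint(20, 90),
                survival_estimate=random.uniform(0.2, 0.9))
        for _ in range(10)
    ]
    # The experimental question is whether a human overseer notices and
    # corrects the cases where the two conditions diverge.
    for p in patients:
        s, b = sound_advice(p), biased_advice(p)
        flag = "  <- conditions diverge" if s != b else ""
        print(f"age={p.age:2d}  survival={p.survival_estimate:.2f}  "
              f"sound={s:<10s}  biased={b:<10s}{flag}")
```

The diverging cases are exactly where human oversight is tested: if participants accept the biased recommendation as readily as the sound one, the machine's bias propagates into the final triage decision.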

Citation (APA)

van der Stigchel, B., van den Bosch, K., van Diggelen, J., & Haselager, P. (2023). Intelligent decision support in medical triage: Are people robust to biased advice? Journal of Public Health (United Kingdom), 45(3), 689–696. https://doi.org/10.1093/pubmed/fdad005
