“Garbage In, Garbage Out”: Mitigating Human Biases in Data Entry by Means of Artificial Intelligence

Abstract

Current HCI research often focuses on mitigating algorithmic biases. While such algorithmic fairness during model training is worthwhile, we argue for mitigating human cognitive biases earlier, namely during data entry. We developed a conversational agent with voice-based data entry and visualization to support financial consultations, which are human-human settings with information asymmetries. In a pre-study, a quantitative analysis of 5 advisors consulting a total of 15 clients reveals data-entry biases among advisors. Our main study evaluates the conversational agent with 12 advisors and 24 clients. A thematic analysis of interviews shows that advisors introduce biases by “feeling” and “forgetting” data. The conversational agent also makes financial consultations more transparent and automates data entry. These findings may transfer to other dyadic settings, such as doctor visits. Finally, we stress that AI not only risks becoming a mirror of human biases but also has the potential to intervene in the early stages of data entry.

Citation (APA)

Eckhardt, S., Knaeble, M., Bucher, A., Staehelin, D., Dolata, M., Agotai, D., & Schwabe, G. (2023). “Garbage In, Garbage Out”: Mitigating Human Biases in Data Entry by Means of Artificial Intelligence. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14144 LNCS, pp. 27–48). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-42286-7_2
