Ethical assessments and mitigation strategies for biases in AI-systems used during the COVID-19 pandemic

15 citations · 60 Mendeley readers

This article is free to access.

Abstract

The main aim of this article is to reflect on the impact of biases related to artificial intelligence (AI) systems developed to tackle issues arising from the COVID-19 pandemic, with special focus on those developed for triage and risk prediction. A secondary aim is to review assessment tools that have been developed to prevent biases in AI systems. In addition, we provide a conceptual clarification of some terms related to biases in this particular context. We focus mainly on non-racial biases, which tend to receive less attention in the existing literature on bias in AI systems. We found that bias in AI systems used for COVID-19 can result in algorithmic injustice, and that the legal frameworks and strategies developed to prevent the emergence of bias have failed to adequately consider social determinants of health. Finally, we make recommendations on how to include more diverse professional profiles in the development of AI systems, in order to increase the epistemic diversity needed to tackle AI biases during the COVID-19 pandemic and beyond.

Citation (APA)

de Manuel, A., Delgado, J., Parra Jounou, I., Ausín, T., Casacuberta, D., Cruz, M., … Puyol, A. (2023). Ethical assessments and mitigation strategies for biases in AI-systems used during the COVID-19 pandemic. Big Data and Society, 10(1). https://doi.org/10.1177/20539517231179199
