Investigating Sources and Effects of Bias in AI-Based Systems – Results from an MLR


Abstract

AI-based systems are becoming increasingly prominent in everyday life, from smart assistants like Amazon’s Alexa to applications in the healthcare industry. With this rise, evidence of bias in AI-based systems has also emerged. The effects of this bias on the groups of people affected can range from inconvenient to life-threatening. As AI-based systems continue to be developed and deployed, it is important that this bias be reduced as far as possible. Through the findings of a multivocal literature review (MLR), we aim to understand what AI-based systems are, what bias is and which types of bias these systems exhibit, the potential risks and effects of this bias, and how bias in AI-based systems can be reduced. In conclusion, addressing and mitigating biases in AI-based systems is crucial for fostering equitable and trustworthy applications; by proactively identifying these biases and implementing strategies to counteract them, we can contribute to the development of more responsible and inclusive AI technologies that benefit all users.

Citation

De Buitlear, C., Byrne, A., McEvoy, E., Camara, A., Yilmaz, M., McCarren, A., & Clarke, P. M. (2023). Investigating Sources and Effects of Bias in AI-Based Systems – Results from an MLR. In Communications in Computer and Information Science (Vol. 1890 CCIS, pp. 20–35). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-42307-9_2
