Abstract
Bias in Artificial Intelligence (AI) is becoming increasingly prominent with the widespread use of AI in autonomous decision-making systems. Bias in AI can take many forms, from age discrimination and recruiting inequality to racial prejudice and gender differentiation. These biases have severe impacts at multiple levels, leading to discrimination and faulty decision-making. This research aims to systematically explore and investigate the pervasiveness of AI bias impacts by collecting, analyzing, and organizing them into categories suitable for effective mitigation. An in-depth analysis is conducted using a systematic literature review to gather and outline the variety of impacts discussed in the literature. Through a holistic qualitative analysis, the research reveals patterns in the types of bias impacts, from which a classification model is developed that places the impacts into four primary domains: fundamental rights, individuals and societies, the financial sector, and businesses and organizations. By identifying the impacts caused by AI bias and categorizing them systematically, targeted mitigation strategies specific to each impact category can be identified and leveraged to manage the risks of AI bias. This study will benefit practitioners and automation engineers worldwide who aim to develop transparent and inclusive AI systems.
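For illustration only, the sketch below shows one way the four-domain classification described in the abstract could be represented programmatically, e.g. to tag documented impacts with a domain and retrieve candidate mitigation targets. The four domain names come from the abstract; the ImpactCategory structure, the example impact entries, and the lookup helper are hypothetical placeholders, not the authors' framework.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactCategory:
    """One domain of the AI-bias impact classification (illustrative only)."""
    domain: str
    example_impacts: list[str] = field(default_factory=list)

# Domain names are taken from the abstract; the example impacts are
# hypothetical placeholders, loosely based on the bias types it mentions.
CLASSIFICATION = [
    ImpactCategory("Fundamental rights",
                   ["discriminatory treatment in automated decisions"]),
    ImpactCategory("Individuals and societies",
                   ["age discrimination", "racial prejudice"]),
    ImpactCategory("Financial sector",
                   ["unequal automated lending or credit outcomes"]),
    ImpactCategory("Businesses and organizations",
                   ["recruiting inequality", "faulty decision-making"]),
]

def impacts_for(domain: str) -> list[str]:
    """Return the example impacts recorded for a given domain name."""
    for category in CLASSIFICATION:
        if category.domain.lower() == domain.lower():
            return category.example_impacts
    return []

if __name__ == "__main__":
    for category in CLASSIFICATION:
        print(f"{category.domain}: {', '.join(category.example_impacts)}")
```

Such a mapping could serve as a lightweight index from impact category to the targeted mitigation strategies the paper proposes, under the assumption that each documented impact is assigned to exactly one domain.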
Citation
Bansal, C., Pandey, K. K., Goel, R., Sharma, A., & Jangirala, S. (2023). Artificial intelligence (AI) bias impacts: classification framework for effective mitigation. Issues in Information Systems, 24(4), 367–389. https://doi.org/10.48009/4_iis_2023_128