Algorithmic Fairness and Bias in Machine Learning Systems


Abstract

In recent years, research into, and concern over, algorithmic fairness and bias in machine learning systems have grown significantly. Because machine learning algorithms are increasingly used in decision-making processes across many disciplines, it is vital to ensure that these systems are fair and impartial and do not reinforce discrimination or social injustice. This abstract gives a general overview of the concept of algorithmic fairness, the difficulties posed by bias in machine learning systems, and approaches to addressing these problems. Algorithmic bias and fairness are therefore crucial issues that demand the attention of academics, practitioners, and policymakers. Building fair and unbiased machine learning systems that uphold equality and prevent discrimination requires addressing biases in training data, designing fairness-aware algorithms, promoting transparency and interpretability, and encouraging diversity and inclusivity.
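The abstract mentions fairness-aware algorithms and bias auditing without giving specifics. As an illustration only (not taken from the paper), below is a minimal Python sketch of one widely used fairness check, the demographic parity difference: the gap in positive-prediction rates between demographic groups. The predictions and group labels are hypothetical.

```python
# Sketch of a demographic parity check (illustrative; not from the paper).
# Demographic parity difference = the absolute gap in positive-prediction
# rates between the best- and worst-treated groups.

def demographic_parity_difference(predictions, groups):
    """Return the absolute gap in positive-prediction rates between groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels (e.g. "A"/"B"), aligned with predictions
    """
    rates = {}
    for g in set(groups):
        members = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] for i in members) / len(members)
    vals = list(rates.values())
    return abs(max(vals) - min(vals))


# Hypothetical audit data: group A receives positive predictions
# at rate 3/4, group B at rate 1/4, so the gap is 0.5.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
```

A gap near 0 indicates similar treatment across groups under this metric; a large gap (here 0.5) flags the model for closer review. Demographic parity is only one of several competing fairness criteria discussed in this literature.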

Citation (APA)
Chandra, R., Sanjaya, K., Aravind, A. R., Abbas, A. R., Gulrukh, R., & Kumar, T. S. S. (2023). Algorithmic Fairness and Bias in Machine Learning Systems. In E3S Web of Conferences (Vol. 399). EDP Sciences. https://doi.org/10.1051/e3sconf/202339904036
