Examining Religion Bias in AI Text Generators


Abstract

One of the biggest reasons artificial intelligence (AI) faces backlash is the inherent bias in AI software. Deep learning algorithms find patterns in the data fed into them and use those patterns to draw the conclusions that drive application decisions. Biases embedded in that data have been shown to surface in the decisions the AI software makes. Algorithmic audits can certify that the software is making responsible decisions. These audits verify standards centered on AI principles such as explainability, accountability, and human-centered values such as fairness and transparency, increasing trust in the algorithm and in the software systems that implement it.

Citation (APA)
Muralidhar, D. (2021). Examining Religion Bias in AI Text Generators. In AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 273–274). Association for Computing Machinery, Inc. https://doi.org/10.1145/3461702.3462469
