Breaking the Bias: Gender Fairness in LLMs Using Prompt Engineering and In-Context Learning

  • Dwivedi, S.
  • Ghosh, S.
  • Dwivedi, S.

Abstract

Large Language Models (LLMs) have been identified as carriers of societal biases, particularly in gender representation. This study introduces an innovative approach employing prompt engineering and in-context learning to rectify these biases in LLMs. Through our methodology, we effectively guide LLMs to generate more equitable content, emphasizing nuanced prompts and in-context feedback. Experimental results on openly available LLMs such as BARD, ChatGPT, and LLAMA2-Chat indicate a significant reduction in gender bias, particularly in traditionally problematic areas such as ‘Literature’. Our findings underscore the potential of prompt engineering and in-context learning as powerful tools in the quest for unbiased AI language models.
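The abstract does not reproduce the specific prompts used in the study, but the general technique it names, steering a model with an explicit fairness instruction plus in-context exemplars, can be illustrated with a minimal sketch. The instruction wording, the exemplars, and the query_llm placeholder below are assumptions for illustration only, not the prompts or evaluation setup from the paper.

```python
# Minimal sketch of bias-aware prompt construction with in-context exemplars.
# The instruction text, exemplars, and query_llm() are illustrative assumptions,
# not the actual prompts or models evaluated in the paper.

FAIRNESS_INSTRUCTION = (
    "When describing people or professions, avoid assuming gender. "
    "Use gender-neutral language unless a gender is explicitly specified."
)

# In-context exemplars demonstrating the desired (gender-fair) behaviour.
EXEMPLARS = [
    {
        "request": "Describe a typical nurse starting their shift.",
        "response": "The nurse reviews their patient charts and prepares the morning medications.",
    },
    {
        "request": "Write a short bio for a successful novelist.",
        "response": "The novelist has published several acclaimed works, and their fiction explores themes of memory and migration.",
    },
]

def build_debiased_prompt(user_request: str) -> str:
    """Assemble an instruction + few-shot prompt that nudges the model
    toward gender-fair completions."""
    parts = [FAIRNESS_INSTRUCTION, ""]
    for ex in EXEMPLARS:
        parts.append(f"Request: {ex['request']}")
        parts.append(f"Response: {ex['response']}")
        parts.append("")
    parts.append(f"Request: {user_request}")
    parts.append("Response:")
    return "\n".join(parts)

if __name__ == "__main__":
    # A real pipeline would send this prompt to whichever chat model is in use
    # (e.g. Bard, ChatGPT, or LLAMA2-Chat); here we simply print the assembled prompt.
    print(build_debiased_prompt("Tell a short story about a brilliant scientist."))
```

The in-context feedback the abstract mentions could extend a sketch like this by appending the model's initial response together with a corrective note and requesting a revised, more equitable completion.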

Citation (APA)

Dwivedi, S., Ghosh, S., & Dwivedi, S. (2023). Breaking the Bias: Gender Fairness in LLMs Using Prompt Engineering and In-Context Learning. Rupkatha Journal on Interdisciplinary Studies in Humanities, 15(4). https://doi.org/10.21659/rupkatha.v15n4.10
