Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models

Abstract

Recent studies have revealed that widely used Pre-trained Language Models (PLMs) propagate societal biases from their large, unmoderated pre-training corpora. Existing solutions require separate debiasing training procedures and datasets, which are resource-intensive and costly, and these methods often hurt the PLMs' performance on downstream tasks. In this study, we propose Gender-tuning, which debiases PLMs through fine-tuning on downstream tasks' datasets. To this end, Gender-tuning integrates the Masked Language Modeling (MLM) training objective into the fine-tuning process. Comprehensive experiments show that Gender-tuning outperforms state-of-the-art baselines in average gender-bias scores while improving PLMs' performance on downstream tasks, using only the downstream tasks' datasets. Gender-tuning is also a deployable debiasing tool for any PLM that works with the original fine-tuning setup.
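
The abstract only describes the approach at a high level. The sketch below is a minimal, hypothetical illustration of what "integrating an MLM objective into fine-tuning" can look like in PyTorch with Hugging Face Transformers: a shared encoder feeds both an MLM head and a downstream classification head, and the two losses are summed. The model name, head layout, loss weighting (`mlm_weight`), and the idea of masking gender-related tokens are illustrative assumptions, not the authors' exact recipe.

```python
# Minimal sketch (illustrative, not the authors' released code): joint
# fine-tuning with a downstream classification loss plus an MLM loss.
import torch.nn as nn
from transformers import AutoModel


class JointMLMClassifier(nn.Module):
    """Shared encoder with an MLM head and a sequence-classification head."""

    def __init__(self, model_name: str = "bert-base-uncased", num_labels: int = 2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        vocab = self.encoder.config.vocab_size
        self.mlm_head = nn.Linear(hidden, vocab)        # predicts masked tokens
        self.cls_head = nn.Linear(hidden, num_labels)   # downstream-task label

    def forward(self, input_ids, attention_mask, mlm_labels, cls_labels,
                mlm_weight: float = 0.5):               # weighting is an assumption
        hidden_states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                             # (batch, seq_len, hidden)

        # MLM loss: positions that are not masked carry label -100 and are ignored.
        mlm_logits = self.mlm_head(hidden_states)
        mlm_loss = nn.CrossEntropyLoss(ignore_index=-100)(
            mlm_logits.view(-1, mlm_logits.size(-1)), mlm_labels.view(-1)
        )

        # Classification loss on the [CLS] representation.
        cls_logits = self.cls_head(hidden_states[:, 0])
        cls_loss = nn.CrossEntropyLoss()(cls_logits, cls_labels)

        # Joint objective: the abstract's "MLM integrated into fine-tuning".
        return cls_loss + mlm_weight * mlm_loss
```

When building `mlm_labels`, one natural (hypothetical) choice in this setting would be to mask gender-related tokens in the downstream-task sentences rather than masking uniformly at random, so that the MLM term targets the bias-relevant vocabulary.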

Citation (APA)

Ghanbarzadeh, S., Huang, Y., Palangi, H., Moreno, R. C., & Khanpour, H. (2023). Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 5448–5458). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.336
