An explosion in the popularity of transformer-based language models (such as GPT-3, BERT, RoBERTa, and ALBERT) has opened the doors to new machine learning applications involving language modeling, text generation, and more. However, recent scrutiny reveals that these language models contain inherent biases towards certain demographics reflected in their training data. While prior research has attempted to mitigate this problem, existing approaches either fail to remove the bias completely, degrade performance (“catastrophic forgetting”), or are costly to execute. This work examines how to reduce gender bias in a GPT-2 language model by fine-tuning less than 1% of its parameters. Through quantitative benchmarks, we show that this is a viable way to reduce prejudice in pre-trained language models while remaining cost-effective at scale.
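The abstract does not specify which parameters are fine-tuned, but the general idea of parameter-efficient debiasing can be sketched with the Hugging Face transformers library: freeze the pre-trained GPT-2 weights and unfreeze only a small subset before continuing training on a debiasing corpus. The choice of LayerNorm parameters and the "gpt2" checkpoint below are illustrative assumptions, not the authors' exact recipe.

```python
# Minimal sketch (illustrative assumptions, not the authors' exact recipe):
# freeze a pre-trained GPT-2 and mark only a tiny parameter subset as trainable.
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Freeze every parameter of the pre-trained model.
for param in model.parameters():
    param.requires_grad = False

# Assumption: unfreeze only the LayerNorm weights and biases (named "ln_1",
# "ln_2", and "ln_f" in GPT-2), a subset amounting to well under 1% of the model.
for name, param in model.named_parameters():
    if ".ln_" in name:
        param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.3f}%)")

# The unfrozen subset would then be fine-tuned on a debiasing corpus using the
# standard causal language-modeling objective (e.g. via transformers.Trainer).
```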
Gira, M., Zhang, R., & Lee, K. (2022). Debiasing Pre-Trained Language Models via Efficient Fine-Tuning. In LTEDI 2022 - 2nd Workshop on Language Technology for Equality, Diversity and Inclusion, Proceedings of the Workshop (pp. 59–69). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.ltedi-1.8