Gender Bias in Meta-Embeddings

Abstract

Different methods have been proposed to develop meta-embeddings from a given set of source embeddings. However, the source embeddings can contain unfair gender-related biases, and how these influence the resulting meta-embeddings has not yet been studied. We study gender bias in meta-embeddings created under three settings: (1) meta-embedding multiple sources without any debiasing (Multi-Source No-Debiasing), (2) meta-embedding multiple sources debiased by a single method (Multi-Source Single-Debiasing), and (3) meta-embedding a single source debiased by different methods (Single-Source Multi-Debiasing). Our experimental results show that meta-embedding amplifies gender bias compared to the input source embeddings. We find that mitigating these biases requires debiasing not only the source embeddings but also their meta-embedding. Moreover, we propose a novel debiasing method based on meta-embedding learning, in which multiple debiasing methods are applied to a single source embedding and the results are combined into a single unbiased meta-embedding.
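
The three settings above differ only in where debiasing is applied: to the source embeddings, to the meta-embedding, or to both. The following is a minimal sketch of the combination step, assuming the simple average-of-normalised-vectors baseline for meta-embedding; the toy vectors are hypothetical and this is not necessarily the meta-embedding method used in the paper.

    import numpy as np

    # Hypothetical toy source embeddings over a shared vocabulary.
    source_a = {"doctor": np.array([0.2, 0.7, 0.1]), "nurse": np.array([0.6, 0.1, 0.3])}
    source_b = {"doctor": np.array([0.3, 0.5, 0.2]), "nurse": np.array([0.5, 0.2, 0.4])}

    def average_meta_embedding(sources):
        # Average the L2-normalised source vectors for each shared word,
        # a common unsupervised meta-embedding baseline used here only
        # for illustration.
        vocab = set.intersection(*(set(s) for s in sources))
        meta = {}
        for word in vocab:
            vecs = [s[word] / np.linalg.norm(s[word]) for s in sources]
            meta[word] = np.mean(vecs, axis=0)
        return meta

    # Multi-Source No-Debiasing: combine the raw sources directly.
    meta = average_meta_embedding([source_a, source_b])
    print(meta["doctor"])

In the other settings described in the abstract, each source (or each copy of the single source) would first be passed through a debiasing method before the same combination step, and the resulting meta-embedding may itself be debiased.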

Citation (APA)

Kaneko, M., Bollegala, D., & Okazaki, N. (2022). Gender Bias in Meta-Embeddings. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 3118–3133). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.227
