Unsupervised Mitigation of Gender Bias by Character Components: A Case Study of Chinese Word Embedding


Abstract

Word embeddings learned from massive text collections have been shown to encode significant discriminative biases. However, debiasing for Chinese, one of the most widely spoken languages, remains underexplored. Moreover, existing approaches rely on manually created supplementary data, which is time- and labor-intensive to produce. In this work, we propose the first Chinese Gender-neutral word Embedding model (CGE), based on Word2vec, which learns gender-neutral word embeddings without any labeled data. Concretely, during training CGE exploits and emphasizes the rich feminine and masculine information carried by radicals, a kind of component of Chinese characters, thereby alleviating discriminative gender biases. Experimental results show that our unsupervised method outperforms state-of-the-art supervised debiased word embedding models without sacrificing the functionality of the embedding model.
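To make the core idea concrete, the sketch below illustrates one plausible way radical information could be surfaced to a skip-gram style trainer: each character's radical (e.g., 女 "female", 亻 "person") is appended as an extra token, so gendered components become explicit context during embedding training. The radical table, corpus, and pairing scheme here are illustrative assumptions, not the paper's actual data or algorithm.

```python
# Illustrative sketch only: a tiny radical lookup table (hypothetical,
# covering just a few characters) and a standard skip-gram pair generator.
RADICALS = {"她": "女", "妈": "女", "他": "亻", "爸": "父"}

def augment_with_radicals(sentence):
    """Split a sentence into characters and append each character's
    radical (if known) as an extra token visible to the trainer."""
    tokens = list(sentence)
    radicals = [RADICALS[ch] for ch in tokens if ch in RADICALS]
    return tokens + radicals

def skipgram_pairs(tokens, window=2):
    """Generate (center, context) pairs as in vanilla skip-gram."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if i != j:
                pairs.append((center, tokens[j]))
    return pairs

tokens = augment_with_radicals("她喜欢猫")  # ['她', '喜', '欢', '猫', '女']
pairs = skipgram_pairs(tokens)
```

The resulting pairs would then feed an ordinary Word2vec objective; the point is only that radical tokens, once injected, let the model observe gendered components directly rather than inferring gender from co-occurrence alone.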

Citation (APA)

Chen, X., Li, M., Yan, R., Gao, X., & Zhang, X. (2022). Unsupervised Mitigation of Gender Bias by Character Components: A Case Study of Chinese Word Embedding. In GeBNLP 2022 - 4th Workshop on Gender Bias in Natural Language Processing, Proceedings of the Workshop (pp. 121–128). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.gebnlp-1.14
