Abstract
Word embeddings learned from massive text collections exhibit significant discriminative biases. However, debiasing for Chinese, one of the world's most widely spoken languages, remains underexplored. Moreover, existing approaches rely on manually created supplementary data, which is time- and labor-intensive to produce. In this work, we propose the first Chinese Gender-neutral word Embedding model (CGE), based on Word2vec, which learns gender-neutral word embeddings without any labeled data. Concretely, during training CGE exploits and emphasizes the rich feminine and masculine information carried by radicals, i.e., a kind of component in Chinese characters, thereby alleviating discriminative gender bias. Experimental results show that our unsupervised method outperforms state-of-the-art supervised debiased word embedding models without sacrificing the functionality of the embedding model.
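The abstract notes that radicals in Chinese characters carry feminine or masculine information that CGE uses during training. The paper does not show code, but the underlying idea of reading a gender signal off a character's radical can be sketched as below; the tiny lookup tables here are illustrative assumptions, not the authors' data (a real system would use a full character-decomposition database).

```python
# Minimal sketch (not the authors' implementation): detect whether a
# Chinese character's radical carries gender information, the kind of
# signal CGE emphasizes during embedding training.

# Radicals treated as gender-marking (illustrative assumption).
GENDER_RADICALS = {
    "女": "feminine",   # "woman" radical
    "男": "masculine",  # "man" component
}

# Hypothetical mini lookup: character -> its semantic radical.
# Real systems would query a character-decomposition database.
RADICAL_OF = {
    "妈": "女",  # "mother" — contains the woman radical
    "姐": "女",  # "elder sister"
    "她": "女",  # "she"
    "他": "亻",  # "he" — person radical, no gender radical here
}

def gender_signal(char):
    """Return 'feminine'/'masculine' if the character's radical is
    gender-marking, else None."""
    radical = RADICAL_OF.get(char)
    if radical is None:
        return None
    return GENDER_RADICALS.get(radical)

print(gender_signal("妈"))  # feminine
print(gender_signal("他"))  # None
```

In the paper's setting, such per-character signals would feed into the Word2vec training objective rather than be used as a standalone classifier.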
Citation
Chen, X., Li, M., Yan, R., Gao, X., & Zhang, X. (2022). Unsupervised Mitigation of Gender Bias by Character Components: A Case Study of Chinese Word Embedding. In GeBNLP 2022 - 4th Workshop on Gender Bias in Natural Language Processing, Proceedings of the Workshop (pp. 121–128). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.gebnlp-1.14