Gender Bias Hidden behind Chinese Word Embeddings: The Case of Chinese Adjectives


Abstract

Gender bias in word embeddings has gradually become an active research field in recent years. Most studies in this field focus on measurement and debiasing methods with English as the target language. This paper investigates gender bias in static word embeddings from a distinctive perspective: Chinese adjectives. By training word representations with different models, we assess the gender bias encoded in the vectors of adjectives. Through a comparison between the produced results and a human-scored data set, we demonstrate how the gender bias encoded in word embeddings differs from people's attitudes.
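The abstract does not spell out the bias metric used. One common way to assess gender bias in static embeddings — and a plausible reading of "assessing the bias behind the vectors of adjectives" — is to compare an adjective vector's mean cosine similarity to a set of male anchor words versus a set of female anchor words. A minimal sketch, with toy vectors standing in for trained embeddings (all names and dimensions here are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np

def gender_bias_score(adj_vec, male_vecs, female_vecs):
    """Mean cosine similarity of an adjective vector to male anchor
    vectors minus its mean similarity to female anchor vectors.
    Positive scores lean male, negative scores lean female."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    male = np.mean([cos(adj_vec, m) for m in male_vecs])
    female = np.mean([cos(adj_vec, f) for f in female_vecs])
    return male - female

# Toy 3-d vectors; in practice these would come from embeddings
# trained with models such as word2vec or GloVe.
male_anchors = [np.array([1.0, 0.0, 0.1]), np.array([0.9, 0.1, 0.0])]
female_anchors = [np.array([0.0, 1.0, 0.1]), np.array([0.1, 0.9, 0.0])]
adjective = np.array([0.8, 0.2, 0.0])  # hypothetical adjective vector

score = gender_bias_score(adjective, male_anchors, female_anchors)
```

Here the toy adjective sits closer to the male anchors, so the score comes out positive; a human-scored data set, as the paper describes, would then be compared against such scores word by word.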


APA

Jiao, M., & Luo, Z. (2021). Gender Bias Hidden behind Chinese Word Embeddings: The Case of Chinese Adjectives. In GeBNLP 2021 - 3rd Workshop on Gender Bias in Natural Language Processing, Proceedings (pp. 8–15). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.gebnlp-1.2
