Trigger warning: This paper contains examples of stereotypes and other harms that may be offensive and triggering to individuals.

Language representations are efficient tools used across NLP applications, but they are rife with encoded societal biases. These biases have been studied extensively, but primarily in English language representations and for biases common in the context of Western society. In this work, we investigate biases present in Hindi language representations, with a focus on caste- and religion-associated biases. We demonstrate how biases are unique to specific language representations based on the history and culture of the region in which the language is widely spoken, and how the same societal bias (such as binary gender-associated bias) is encoded by different words and text spans across languages. Our findings highlight the necessity of cultural awareness and attention to linguistic artifacts when modeling language representations, in order to better understand the encoded biases.
Malik, V., Dev, S., Nishi, A., Peng, N., & Chang, K.-W. (2022). Socially Aware Bias Measurements for Hindi Language Representations. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2022) (pp. 1041–1052). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.naacl-main.76