Recent developments in Neural Relation Extraction (NRE) have made significant strides toward automated knowledge base construction. While much attention has been devoted to improving accuracy, no prior work has evaluated the social biases exhibited by NRE systems. In this paper, we create WikiGenderBias, a distantly supervised dataset of over 45,000 sentences, including a 10% human-annotated test set, for analyzing gender bias in relation extraction systems. We find that when extracting spouse and hypernym (i.e., occupation) relations, an NRE system performs differently depending on the gender of the target entity; no such disparity appears when extracting relations such as birth date or birth place. We also analyze two existing bias mitigation techniques, word embedding debiasing and data augmentation. Unfortunately, because NRE models rely heavily on surface-level cues, we find that these existing mitigation approaches degrade NRE performance. Our analysis lays the groundwork for future work on quantifying and mitigating bias in relation extraction.
Gaut, A., Sun, T., Tang, S., Huang, Y., Qian, J., ElSherief, M., … Wang, W. Y. (2020). Towards understanding gender bias in neural relation extraction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 2943–2953). Association for Computational Linguistics (ACL).