Model-Agnostic Bias Measurement in Link Prediction

Abstract

Link prediction models based on factual knowledge graphs are commonly used in applications such as search and question answering. However, work investigating social bias in these models has been limited. Previous work focused on knowledge graph embeddings, so more recent classes of models that achieve superior results by fine-tuning Transformers have not yet been investigated. We therefore present a model-agnostic approach for bias measurement that leverages fairness metrics to compare bias in knowledge graph embedding-based predictions (KG only) with models that use pre-trained, Transformer-based language models (KG+LM). We further create a dataset to measure gender bias in occupation predictions and assess whether KG+LM models are more or less biased than KG only models. We find that gender bias tends to be higher for the KG+LM models and analyze potential connections to the accuracy of the models and the data bias inherent in our dataset. Finally, we discuss limitations and ethical considerations of our work. The repository containing the source code and the dataset is publicly available at https://github.com/lena-schwert/comparing-bias-in-KG-models.
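
To make the abstract's "fairness metrics" concrete, the sketch below computes a demographic parity difference: the gap in positive-prediction rates between two gender groups, for a binary "model predicted the target occupation" outcome. This is a generic illustration under assumed data structures (the Prediction record and group labels are hypothetical), not the metric set or code from the authors' repository.

```python
# Minimal sketch of a demographic parity difference for occupation
# predictions, grouped by a binary gender attribute. NOT the authors'
# implementation; Prediction and the group labels are assumptions.

from dataclasses import dataclass

@dataclass
class Prediction:
    gender: str     # sensitive attribute, e.g. "female" / "male"
    predicted: int  # 1 if the model predicted the target occupation, else 0

def demographic_parity_difference(preds, group_a="female", group_b="male"):
    """P(predicted = 1 | group_a) - P(predicted = 1 | group_b)."""
    def rate(group):
        hits = [p.predicted for p in preds if p.gender == group]
        return sum(hits) / len(hits) if hits else 0.0
    return rate(group_a) - rate(group_b)

# Toy usage: rate("female") = 0.5, rate("male") = 1.0
preds = [
    Prediction("female", 1), Prediction("female", 0),
    Prediction("male", 1), Prediction("male", 1),
]
print(demographic_parity_difference(preds))  # -0.5
```

A value near zero means the model predicts the occupation at similar rates across groups; the sign indicates which group is favored.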

Citation (APA)

Schwertmann, L., Ravi, M. P. K., & de Melo, G. (2023). Model-Agnostic Bias Measurement in Link Prediction. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Findings of EACL 2023 (pp. 1587–1603). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-eacl.121
