Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods

Abstract

Despite the widespread use of Knowledge Graph Embeddings (KGE), little is known about the security vulnerabilities that might disrupt their intended behaviour. We study data poisoning attacks against KGE models for link prediction. These attacks craft adversarial additions or deletions at training time to cause model failure at test time. To select adversarial deletions, we propose to use the model-agnostic instance attribution methods from Interpretable Machine Learning, which identify the training instances that are most influential to a neural model's predictions on test instances. We use these influential triples as adversarial deletions. We further propose a heuristic method to replace one of the two entities in each influential triple to generate adversarial additions. Our experiments show that the proposed strategies outperform the state-of-the-art data poisoning attacks on KGE models and improve the MRR degradation due to the attacks by up to 62% over the baselines.
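To make the attack pipeline concrete, below is a minimal illustrative sketch, not the authors' implementation. It assumes a DistMult scorer, uses a plain gradient dot product as the instance-attribution score (the paper evaluates several attribution methods), and generates additions by swapping the object of each influential triple for the entity most dissimilar in embedding space (one plausible reading of the entity-replacement heuristic). All function names here are hypothetical.

```python
import torch

def score_triple(ent_emb, rel_emb, s, r, o):
    # DistMult score for triple (s, r, o): <e_s, w_r, e_o>.
    return (ent_emb[s] * rel_emb[r] * ent_emb[o]).sum()

def triple_grad(ent_emb, rel_emb, triple):
    # Gradient of the triple's score w.r.t. all embedding parameters.
    ent_emb.grad = None
    rel_emb.grad = None
    score_triple(ent_emb, rel_emb, *triple).backward()
    return torch.cat([ent_emb.grad.flatten(), rel_emb.grad.flatten()])

def grad_dot_influence(ent_emb, rel_emb, target, train_triples, k=1):
    # Rank training triples by the dot product of their score gradient
    # with the target test triple's gradient (a simple instance-attribution
    # proxy); the top-k triples are candidate adversarial deletions.
    g_target = triple_grad(ent_emb, rel_emb, target)
    scores = [(triple_grad(ent_emb, rel_emb, t) @ g_target).item()
              for t in train_triples]
    top = sorted(range(len(train_triples)), key=lambda i: -scores[i])[:k]
    return [train_triples[i] for i in top]

def adversarial_addition(ent_emb, influential):
    # Heuristic addition (assumed variant): corrupt the influential triple
    # by replacing its object with the entity least cosine-similar to it.
    s, r, o = influential
    with torch.no_grad():
        sims = torch.cosine_similarity(ent_emb[o].unsqueeze(0), ent_emb)
        sims[o] = float("inf")  # never pick the original entity back
        o_new = int(sims.argmin())
    return (s, r, o_new)

# Toy usage with random embeddings (5 entities, 2 relations, dim 8):
torch.manual_seed(0)
ent = torch.randn(5, 8, requires_grad=True)
rel = torch.randn(2, 8, requires_grad=True)
train = [(0, 0, 1), (1, 1, 2), (3, 0, 4)]
target = (0, 0, 1)  # test triple whose prediction the attacker degrades
deletions = grad_dot_influence(ent, rel, target, train, k=1)
additions = [adversarial_addition(ent, t) for t in deletions]
```

In the actual attack, the deletions are removed from (and the additions appended to) the training set before the KGE model is retrained; the drop in the target triples' MRR then measures attack effectiveness.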

Citation (APA)

Bhardwaj, P., Kelleher, J., Costabello, L., & O’Sullivan, D. (2021). Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 8225–8239). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.648
