Understanding Negative Sampling in Knowledge Graph Embedding

  • Qian J
  • Li G
  • Atkinson K
  • Yue Y

Abstract

Knowledge graph embedding (KGE) projects the entities and relations of a knowledge graph (KG) into a low-dimensional vector space, and has made steady progress in recent years. Conventional KGE methods, especially translational distance-based models, are trained by discriminating positive samples from negative ones. Because most KGs store only positive samples for space efficiency, negative sampling plays a crucial role in encoding the triples of a KG. The quality of the generated negative samples directly affects the performance of the learnt knowledge representation in a myriad of downstream tasks, such as recommendation, link prediction and node classification. We summarize current negative sampling approaches in KGE into three categories: static distribution-based, dynamic distribution-based and custom cluster-based. Based on this categorization we discuss the most prevalent existing approaches and their characteristics. We hope this review can provide guidelines for new thinking about negative sampling in KGE.
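To make the corruption idea concrete, below is a minimal sketch of the simplest static distribution-based strategy: uniform negative sampling, which replaces the head or tail of a positive triple with an entity drawn uniformly at random and rejects corruptions that are themselves positive (false negatives). The toy triples, entity list, and function name are illustrative assumptions, not taken from the paper.

    import random

    def uniform_negative_sample(triple, entities, positives):
        """Corrupt the head or tail of a positive triple with a uniformly
        sampled entity, rejecting corruptions already present in the KG."""
        h, r, t = triple
        while True:
            e = random.choice(entities)
            # Flip a fair coin: corrupt either the head or the tail.
            corrupted = (e, r, t) if random.random() < 0.5 else (h, r, e)
            # Filter out false negatives that are stored as positives.
            if corrupted not in positives:
                return corrupted

    # Toy KG: positive triples stored as (head, relation, tail).
    positives = {("alice", "likes", "bob"), ("bob", "likes", "carol")}
    entities = ["alice", "bob", "carol", "dave"]

    print(uniform_negative_sample(("alice", "likes", "bob"), entities, positives))

Replacing the uniform draw with a distribution that adapts during training would move this sketch toward the dynamic distribution-based category the abstract describes.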

Citation (APA)

Qian, J., Li, G., Atkinson, K., & Yue, Y. (2021). Understanding Negative Sampling in Knowledge Graph Embedding. International Journal of Artificial Intelligence & Applications, 12(1), 71–81. https://doi.org/10.5121/ijaia.2021.12105
