Quantifying similarity between relations with fact distribution

Abstract

We introduce a conceptually simple and effective method to quantify the similarity between relations in knowledge bases. Specifically, our approach is based on the divergence between the conditional probability distributions over entity pairs. In this paper, these distributions are parameterized by a very simple neural network. Although computing the exact similarity is intractable, we provide a sampling-based method to obtain a good approximation. We empirically show that the outputs of our approach correlate significantly with human judgments. By applying our method to various tasks, we also find that (1) our approach can effectively detect redundant relations extracted by open information extraction (Open IE) models; (2) even the most competitive relation classification models still make mistakes among very similar relations; and (3) our approach can be incorporated into negative sampling and softmax classification to alleviate these mistakes.
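The core idea in the abstract, measuring relation similarity via the divergence between conditional distributions over entity pairs and approximating it by sampling, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two distributions below are hand-made categorical distributions over a tiny set of entity-pair indices standing in for the neural-network-parameterized P(h, t | r), and the symmetrized-KL-to-similarity mapping is an assumed choice for the sketch.

```python
import math
import random

random.seed(0)

# Toy stand-ins for two relations' conditional distributions P(h, t | r)
# over a tiny vocabulary of entity pairs (indexed 0..4). In the paper these
# come from a simple neural network; the numbers here are illustrative.
p = [0.4, 0.3, 0.2, 0.05, 0.05]   # relation r1
q = [0.35, 0.3, 0.2, 0.1, 0.05]   # relation r2

def mc_kl(p, q, n=100_000):
    """Monte Carlo estimate of KL(P || Q) = E_{x~P}[log P(x) - log Q(x)],
    using n samples drawn from P (the sampling-based approximation)."""
    idx = random.choices(range(len(p)), weights=p, k=n)
    return sum(math.log(p[i]) - math.log(q[i]) for i in idx) / n

def similarity(p, q):
    """Symmetrize the two KL directions and map into (0, 1]:
    identical distributions give similarity 1, large divergence gives ~0."""
    return math.exp(-0.5 * (mc_kl(p, q) + mc_kl(q, p)))

# Sanity check: the sampled estimate should be close to the exact KL,
# which is computable here only because the toy support is tiny.
exact = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
print(mc_kl(p, q), exact, similarity(p, q))
```

With a realistic knowledge base the support (all entity pairs) is far too large to enumerate, which is why the exact divergence is intractable and the Monte Carlo estimate over samples from P is used instead.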

Citation (APA)

Chen, W., Zhu, H., Han, X., Liu, Z., & Sun, M. (2020). Quantifying similarity between relations with fact distribution. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 2882–2894). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p19-1278
