Global relation embedding for relation extraction


Abstract

We study the problem of textual relation embedding with distant supervision. To combat the wrong labeling problem of distant supervision, we propose to embed textual relations with global statistics of relations, i.e., the co-occurrence statistics of textual and knowledge base relations collected from the entire corpus. This approach turns out to be more robust to the training noise introduced by distant supervision. On a popular relation extraction dataset, we show that the learned textual relation embedding can be used to augment existing relation extraction models and significantly improve their performance. Most remarkably, for the top 1,000 relational facts discovered by the best existing model, the precision can be improved from 83.9% to 89.3%.
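The core idea in the abstract is to replace noisy per-sentence labels with corpus-level co-occurrence statistics between textual relations (e.g., dependency paths between entity mentions) and knowledge base relations. A minimal sketch of collecting such global statistics might look as follows; the textual relation strings and KB relation names below are illustrative assumptions, not data from the paper:

```python
from collections import Counter, defaultdict

# Hypothetical distantly supervised corpus: each item pairs a textual
# relation (here, a toy dependency-path string) with the KB relation
# that distant supervision assigned to the entity pair. Individual
# labels may be wrong; the aggregate distribution is more reliable.
aligned_pairs = [
    ("<-poss<-president->of->", "/business/company/founders"),   # noisy label
    ("<-poss<-president->of->", "/business/person/company"),
    ("<-poss<-president->of->", "/business/person/company"),
    ("->born->in->", "/people/person/place_of_birth"),
]

def global_cooccurrence(pairs):
    """Count (textual relation, KB relation) co-occurrences over the
    whole corpus and normalize each textual relation's counts into a
    distribution over KB relations."""
    counts = defaultdict(Counter)
    for textual, kb in pairs:
        counts[textual][kb] += 1
    stats = {}
    for textual, kb_counts in counts.items():
        total = sum(kb_counts.values())
        stats[textual] = {kb: c / total for kb, c in kb_counts.items()}
    return stats

stats = global_cooccurrence(aligned_pairs)
```

Such a normalized distribution can then serve as a training target for the embedding model, so that a textual relation is pushed toward the KB relations it co-occurs with globally rather than toward any single (possibly wrong) sentence-level label.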

Citation (APA)

Su, Y., Liu, H., Yavuz, S., Gür, I., Sun, H., & Yan, X. (2018). Global relation embedding for relation extraction. In NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference (Vol. 1, pp. 820–830). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n18-1075
