Unsupervised relation extraction from language models using constrained cloze completion


Abstract

We show that state-of-the-art self-supervised language models can be readily used to extract relations from a corpus without the need to train a fine-tuned extractive head. We introduce RE-Flex, a simple framework that performs constrained cloze completion over pretrained language models to perform unsupervised relation extraction. RE-Flex uses contextual matching to ensure that language model predictions match supporting evidence from the input corpus that is relevant to a target relation. We perform an extensive experimental study over multiple relation extraction benchmarks and demonstrate that RE-Flex outperforms competing unsupervised relation extraction methods based on pretrained language models by up to 27.8 F1 points over the next-best method. Our results show that constrained inference queries against a language model can enable accurate unsupervised relation extraction.
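To illustrate the core idea, the sketch below shows what constrained cloze completion looks like in principle: a relation is phrased as a cloze template, and the fill for the masked slot is restricted to tokens that actually appear in the supporting context sentence. This is a hypothetical, self-contained toy; RE-Flex queries a real pretrained masked language model, whereas here a trivial co-occurrence scorer stands in for the model, and all names and scores are invented for illustration.

```python
# Toy sketch of constrained cloze completion for relation extraction.
# A real system would score candidates with a pretrained masked LM;
# here `toy_lm_score` is a hypothetical stand-in so the example runs
# without any model downloads.

def constrained_cloze(template, context, lm_score):
    """Fill the [MASK] slot, restricting candidates to context tokens.

    The constraint is the key step: instead of letting the language
    model generate any token, only tokens drawn from the supporting
    evidence sentence are allowed as answers.
    """
    candidates = set(context.replace(".", "").split())
    return max(candidates, key=lambda tok: lm_score(template, tok))

# Hypothetical stand-in for an LM: fixed scores favoring the correct
# object of the "capital of" relation in this tiny example.
toy_scores = {"France": 3.0, "Paris": 1.0}

def toy_lm_score(template, token):
    return toy_scores.get(token, 0.0)

context = "Paris is the capital of France."
template = "Paris is the capital of [MASK]."
print(constrained_cloze(template, context, toy_lm_score))  # → France
```

Because the answer set is limited to the evidence sentence, the extraction cannot hallucinate an entity absent from the corpus, which is the intuition behind matching predictions to supporting evidence.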

Cite

APA

Goswami, A., Bhat, A., Ohana, H., & Rekatsinas, T. (2020). Unsupervised relation extraction from language models using constrained cloze completion. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 1263–1276). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.findings-emnlp.113
