Zero-Shot Information Extraction as a Unified Text-to-Triple Translation


Abstract

We cast a suite of information extraction tasks into a text-to-triple translation framework. Instead of solving each task with task-specific datasets and models, we formalize every task as a translation between task-specific input text and output triples. Given the task-specific input, we enable task-agnostic translation by leveraging the latent knowledge that a pre-trained language model has about the task. We further demonstrate that a simple pre-training task of predicting which relational information corresponds to which input text is an effective way to produce task-specific outputs, which enables zero-shot transfer of our framework to downstream tasks. We study the zero-shot performance of this framework on open information extraction (OIE2016, NYT, WEB, PENN), relation classification (FewRel and TACRED), and factual probing (Google-RE and T-REx). The model transfers non-trivially to most tasks and is often competitive with fully supervised methods without any task-specific training. For instance, it significantly outperforms the F1 score of a supervised open information extraction method without using its training set.
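To make the "text-to-triple" framing concrete, here is a minimal sketch of how a task input could be encoded for a language model and how a generated string could be parsed back into (subject, relation, object) triples. The prompt wording, the `|`/`;` output delimiters, and the function names are illustrative assumptions, not the paper's actual implementation; a real system would replace `fake_generation` with a pre-trained language model's output.

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def encode_input(sentence: str) -> str:
    # Task-specific input: cast the raw sentence as a translation request.
    # The prompt text is a hypothetical example, not the paper's format.
    return f"translate to triples: {sentence}"

def decode_output(generated: str) -> List[Triple]:
    # Assume the model emits triples as "subj | rel | obj ; subj | rel | obj".
    triples: List[Triple] = []
    for chunk in generated.split(";"):
        parts = [p.strip() for p in chunk.split("|")]
        if len(parts) == 3:
            triples.append((parts[0], parts[1], parts[2]))
    return triples

# Stand-in for the language model's generation step.
fake_generation = "Barack Obama | born in | Hawaii"
print(encode_input("Barack Obama was born in Hawaii."))
print(decode_output(fake_generation))
# -> [('Barack Obama', 'born in', 'Hawaii')]
```

The point of the shared triple format is that open information extraction, relation classification, and factual probing all reduce to producing (or scoring) such triples, so one model and one decoding routine can serve all three tasks zero-shot.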

Citation (APA)

Wang, C., Liu, X., Chen, Z., Hong, H., Tang, J., & Song, D. (2021). Zero-Shot Information Extraction as a Unified Text-to-Triple Translation. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 1225–1238). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.94
