Semi-supervised Relation Extraction via Incremental Meta Self-Training

53 citations · 83 Mendeley readers
Abstract

To reduce the human effort required for large-scale annotation, semi-supervised relation extraction methods aim to leverage unlabeled data in addition to learning from limited labeled samples. Existing self-training methods suffer from the gradual drift problem, in which noisy pseudo labels on unlabeled data are incorporated during training. To alleviate this noise, we propose MetaSRE, in which a Relation Label Generation Network assesses the quality of pseudo labels by (meta) learning from the successful and failed attempts of a Relation Classification Network as an additional meta-objective. To reduce the influence of noisy pseudo labels, MetaSRE adopts a pseudo-label selection and exploitation scheme that assesses pseudo-label quality on unlabeled samples and exploits only high-quality pseudo labels in a self-training fashion to incrementally augment the labeled samples, improving both robustness and accuracy. Experimental results on two public datasets demonstrate the effectiveness of the proposed approach. Source code is available.
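The incremental self-training scheme the abstract describes can be sketched as a loop: fit a classifier on the labeled set, score pseudo labels on the unlabeled pool, and move only high-confidence pseudo-labeled samples into the labeled set each round. The sketch below is a minimal illustration under stated assumptions, not the authors' MetaSRE implementation: the nearest-centroid classifier, the confidence score, the 0.8 threshold, and the toy 2D features all stand in for the paper's neural networks and learned quality assessment.

```python
# Hedged sketch of incremental self-training with pseudo-label selection.
# Assumptions (not from the paper): nearest-centroid classifier over 2D
# features, a softmax-of-negative-distance confidence score, threshold 0.8.
import math

def centroids(labeled):
    """Mean feature vector per relation label."""
    sums, counts = {}, {}
    for (x0, x1), y in labeled:
        s0, s1 = sums.get(y, (0.0, 0.0))
        sums[y] = (s0 + x0, s1 + x1)
        counts[y] = counts.get(y, 0) + 1
    return {y: (s[0] / counts[y], s[1] / counts[y]) for y, s in sums.items()}

def predict_with_score(cents, x):
    """Predict the nearest centroid; return (label, confidence in [0, 1])."""
    dists = {y: math.dist(c, x) for y, c in cents.items()}
    y_hat = min(dists, key=dists.get)
    weights = {y: math.exp(-d) for y, d in dists.items()}
    return y_hat, weights[y_hat] / sum(weights.values())

def incremental_self_train(labeled, unlabeled, threshold=0.8, rounds=3):
    """Each round: retrain, then exploit only high-quality pseudo labels."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        cents = centroids(labeled)
        kept, rest = [], []
        for x in pool:
            y_hat, score = predict_with_score(cents, x)
            # Selection-and-exploitation: keep a pseudo label only if its
            # quality score clears the threshold.
            (kept if score >= threshold else rest).append((x, y_hat))
        labeled += kept                    # incrementally augment labeled set
        pool = [x for x, _ in rest]        # low-confidence samples stay unlabeled
        if not kept:                       # no new pseudo labels accepted
            break
    return labeled

# Toy usage: two well-separated clusters plus one ambiguous point.
seed = [((0.0, 0.0), "A"), ((5.0, 5.0), "B")]
pool = [(0.1, 0.1), (4.9, 5.1), (2.5, 2.5)]
augmented = incremental_self_train(seed, pool)
# The two near-cluster points are adopted; the midpoint (2.5, 2.5) never
# clears the threshold and is left unlabeled.
```

The ambiguous midpoint illustrates the scheme's purpose: a plain self-training loop would force a label onto it and risk gradual drift, while quality-gated exploitation leaves it out.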

Citation (APA)

Hu, X., Zhang, C., Ma, F., Liu, C., Wen, L., & Yu, P. S. (2021). Semi-supervised Relation Extraction via Incremental Meta Self-Training. In Findings of the Association for Computational Linguistics, Findings of ACL: EMNLP 2021 (pp. 487–496). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-emnlp.44
