Learning Prototype Representations across Few-Shot Tasks for Event Detection

24 citations · 59 Mendeley readers

Abstract

We address the sampling bias and outlier issues in few-shot learning for event detection, a subtask of information extraction. We propose to model the relations between training tasks in episodic few-shot learning by introducing cross-task prototypes. We further enforce prediction consistency among classifiers across tasks to make the model more robust to outliers. Extensive experiments show consistent improvements on three few-shot learning datasets. The findings suggest that our model is more robust when labeled data for novel event types is limited. The source code is available at http://github.com/laiviet/fsl-proact.
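The prototype-based classification underlying this line of work can be illustrated with a minimal sketch of a standard prototypical network step: each class prototype is the mean embedding of its support examples, and queries are assigned to the nearest prototype. This is a generic illustration of the base technique, not the paper's cross-task prototype method; the function names and toy data are assumptions for the example.

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    # Class prototype = mean embedding of the support examples of that class.
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # Assign each query to the nearest prototype (Euclidean distance).
    dists = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy episode: 2 event types, 2-D embeddings, 2 support examples each.
support = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, 2)
queries = np.array([[0., 0.5], [5., 5.5]])
print(classify(queries, protos))  # [0 1]
```

The paper's contribution extends this setup by sharing prototype information across episodic training tasks and by enforcing consistent predictions among per-task classifiers, which mitigates the sampling bias of any single small support set.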

Citation (APA)

Lai, V. D., Dernoncourt, F., & Nguyen, T. H. (2021). Learning Prototype Representations across Few-Shot Tasks for Event Detection. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 5270–5277). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.427
