Language Model Priming for Cross-Lingual Event Extraction

Abstract

We present a novel, language-agnostic approach to "priming" language models for the task of event extraction, providing particularly effective performance in low-resource and zero-shot cross-lingual settings. With priming, we augment the input to the transformer stack's language model differently depending on the question(s) being asked of the model at runtime. For instance, if the model is being asked to identify arguments for the trigger protested, we will provide that trigger as part of the input to the language model, allowing it to produce different representations for candidate arguments than when it is asked about arguments for the trigger arrest elsewhere in the same sentence. We show that by enabling the language model to better compensate for the deficits of sparse and noisy training data, our approach improves both trigger and argument detection and classification significantly over the state of the art in a zero-shot cross-lingual setting.
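
To make the priming mechanism concrete, the sketch below encodes the same sentence twice, once per trigger, so that the same candidate-argument tokens receive different contextual representations depending on which trigger the model is asked about. This is a minimal illustration under stated assumptions, not the authors' implementation: the choice of xlm-roberta-base, the segment-pair input format, and the primed_encoding helper are all hypothetical.

```python
# Minimal sketch of trigger priming: condition the encoder on the
# question being asked by feeding the trigger alongside the sentence.
from transformers import AutoModel, AutoTokenizer

# xlm-roberta-base is an assumption; any multilingual encoder would do.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")


def primed_encoding(trigger: str, sentence: str):
    """Encode `sentence` with `trigger` supplied as a priming segment.

    Passing the trigger as the first segment of a segment pair is one
    plausible input format; the paper's exact format may differ.
    """
    inputs = tokenizer(trigger, sentence, return_tensors="pt")
    outputs = encoder(**inputs)
    # Contextual token embeddings now reflect the primed trigger.
    return outputs.last_hidden_state


sentence = "Police arrested dozens after crowds protested downtown."

# Same sentence, two different primes -> two different representations
# for the same candidate-argument tokens.
protest_view = primed_encoding("protested", sentence)
arrest_view = primed_encoding("arrest", sentence)
```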

Citation (APA)

Fincke, S., Agarwal, S., Miller, S., & Boschee, E. (2022). Language Model Priming for Cross-Lingual Event Extraction. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 10627–10635). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i10.21307
