Exploring Text Representations for Generative Temporal Relation Extraction

Abstract

Sequence-to-sequence models are appealing because they allow both encoder and decoder to be shared across many tasks by formulating those tasks as text-to-text problems. Despite recently reported successes of such models, we find that engineering input/output representations for such text-to-text models is challenging. On the Clinical TempEval 2016 relation extraction task, the most natural choice of output representation, where relations are spelled out in simple predicate logic statements, did not lead to good performance. We explore a variety of input/output representations, the most successful of which prompts the model with one event at a time, achieving results competitive with standard pairwise temporal relation extraction systems.
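To make the representational choices concrete, here is a minimal sketch contrasting the two output formats the abstract describes: a predicate-logic-style output that spells out all relations at once, versus prompting for one event at a time. The sentence, event names, and template strings are illustrative assumptions; the exact formats used in the paper may differ.

```python
# Hypothetical input/output formats for generative temporal relation
# extraction. All templates below are illustrative assumptions, not
# the paper's actual prompts.

sentence = "The patient was admitted after the fall."

# Predicate-logic style: one target string encoding every relation.
full_output = "AFTER(admitted, fall)"

def make_event_prompt(text, event):
    """Build a per-event prompt: the model is asked only about
    relations involving this single event (one-event-at-a-time)."""
    return f"{text} | relations for event: {event}"

# One prompt per event instead of one global target string.
prompts = [make_event_prompt(sentence, e) for e in ("admitted", "fall")]
for p in prompts:
    print(p)
```

The per-event formulation shortens each target sequence and gives the decoder a single anchor event per query, which is one plausible reason it outperforms emitting all predicate-logic statements at once.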

Cite

APA

Dligach, D., Miller, T., Bethard, S., & Savova, G. (2022). Exploring Text Representations for Generative Temporal Relation Extraction. In ClinicalNLP 2022 - 4th Workshop on Clinical Natural Language Processing, Proceedings (pp. 109–113). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.clinicalnlp-1.12
