Ensemble-based Fine-Tuning Strategy for Temporal Relation Extraction from the Clinical Narrative

Citations: 4
Mendeley readers: 28
Abstract

In this paper, we investigate ensemble methods for fine-tuning transformer-based pretrained models for clinical natural language processing tasks, specifically temporal relation extraction from the clinical narrative. Our experimental results on the THYME data show that ensembling as a fine-tuning strategy can further boost model performance over hyperparameter-optimized single learners. Dynamic snapshot ensembling is particularly beneficial, as it fine-tunes a wide array of parameters and yields a 2.8% absolute improvement in F1 over the base single learner.
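To make the snapshot-ensembling idea mentioned in the abstract concrete, the sketch below shows the general technique (not the authors' actual setup): during fine-tuning, a copy of the model weights is saved at the end of each cyclic learning-rate cycle, and at prediction time the snapshots' softmax outputs are averaged. The toy classifier, random data, and hyperparameters (T_0, number of cycles, learning rate) are placeholders chosen for illustration only; the paper's experiments instead fine-tune transformer models on THYME.

```python
# Minimal sketch of snapshot ensembling as a fine-tuning strategy.
# NOT the paper's implementation: model, data, and hyperparameters
# below are illustrative placeholders.
import copy
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

torch.manual_seed(0)

# Toy stand-in for a transformer relation classifier (2 relation labels).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-3)
# Cyclic (cosine warm-restart) schedule: one snapshot per cycle.
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=50)
loss_fn = nn.CrossEntropyLoss()

# Random placeholder data in place of real THYME features and labels.
x = torch.randn(256, 16)
y = torch.randint(0, 2, (256,))

snapshots = []
steps_per_cycle, n_cycles = 50, 3
for cycle in range(n_cycles):
    for step in range(steps_per_cycle):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()
    # End of a learning-rate cycle: keep a copy of the current weights.
    snapshots.append(copy.deepcopy(model).eval())

# Ensemble prediction: average the snapshots' softmax outputs.
with torch.no_grad():
    probs = torch.stack([torch.softmax(m(x), dim=-1) for m in snapshots]).mean(0)
    preds = probs.argmax(dim=-1)
print("ensemble accuracy on the toy data:", (preds == y).float().mean().item())
```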

Cite (APA)

Wang, L., Miller, T., Bethard, S., & Savova, G. (2022). Ensemble-based Fine-Tuning Strategy for Temporal Relation Extraction from the Clinical Narrative. In ClinicalNLP 2022 - 4th Workshop on Clinical Natural Language Processing, Proceedings (pp. 103–108). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.clinicalnlp-1.11
