In this paper, we investigate ensemble methods for fine-tuning transformer-based pretrained models for clinical natural language processing tasks, specifically temporal relation extraction from the clinical narrative. Our experimental results on the THYME data show that ensembling as a fine-tuning strategy can further boost model performance over hyperparameter-optimized single learners. Dynamic snapshot ensembling is particularly beneficial, as it fine-tunes a wide array of parameters and yields a 2.8% absolute improvement in F1 over the base single learner.
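The snapshot-ensembling idea referenced above can be illustrated with a minimal sketch: train a single model under a cyclic (cosine-annealed, restarting) learning rate, save a parameter snapshot at the end of each cycle, and average the snapshots' predictions at inference time. The toy logistic-regression model, the cycle lengths, and all function names below are illustrative assumptions, not the paper's actual transformer setup:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cyclic_lr(step, steps_per_cycle, lr_max=0.5):
    # Cosine-annealed learning rate that restarts at the start of each
    # cycle, as in warm-restart schedules used for snapshot ensembling.
    t = (step % steps_per_cycle) / steps_per_cycle
    return 0.5 * lr_max * (1.0 + math.cos(math.pi * t))

def train_snapshots(data, n_cycles=3, steps_per_cycle=200, seed=0):
    # Toy 1-D logistic regression; a snapshot (w, b) is saved at the
    # end of every learning-rate cycle, when the rate is near zero.
    rng = random.Random(seed)
    w, b = rng.uniform(-1.0, 1.0), 0.0
    snapshots = []
    for step in range(n_cycles * steps_per_cycle):
        lr = cyclic_lr(step, steps_per_cycle)
        x, y = data[step % len(data)]
        p = sigmoid(w * x + b)
        grad = p - y          # dL/dz for the log loss
        w -= lr * grad * x
        b -= lr * grad
        if (step + 1) % steps_per_cycle == 0:
            snapshots.append((w, b))
    return snapshots

def ensemble_predict(snapshots, x):
    # Average the snapshots' probabilities, then threshold at 0.5.
    p = sum(sigmoid(w * x + b) for w, b in snapshots) / len(snapshots)
    return 1 if p >= 0.5 else 0

# Linearly separable toy data: label 1 iff x > 0.
data = [(x / 10.0, 1 if x > 0 else 0) for x in range(-10, 11) if x != 0]
snaps = train_snapshots(data)
preds = [ensemble_predict(snaps, x) for x, _ in data]
```

Because each snapshot is taken at a different local minimum reached after a learning-rate restart, the averaged predictions can outperform any single checkpoint at no extra training cost.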
Citation
Wang, L., Miller, T., Bethard, S., & Savova, G. (2022). Ensemble-based Fine-Tuning Strategy for Temporal Relation Extraction from the Clinical Narrative. In ClinicalNLP 2022 - 4th Workshop on Clinical Natural Language Processing, Proceedings (pp. 103–108). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.clinicalnlp-1.11