MDC at SemEval-2023 Task 7: Fine-tuning transformers for textual entailment prediction and evidence retrieval in clinical trials


Abstract

We present our entry to the Multi-evidence Natural Language Inference for Clinical Trial Data task at SemEval 2023. We submitted entries for both the evidence retrieval and textual entailment sub-tasks. For the evidence retrieval task, we fine-tuned the PubMedBERT transformer model to extract relevant evidence from clinical trial data given a hypothesis concerning either a single clinical trial or a pair of clinical trials. Our best-performing model achieved an F1 score of 0.804. For the textual entailment task, in which systems had to predict whether a hypothesis about either a single clinical trial or a pair of clinical trials is true or false, we fine-tuned the BioLinkBERT transformer model. We passed our evidence retrieval model's output into our textual entailment model and submitted its output for the evaluation. Our best-performing model achieved an F1 score of 0.695.
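The abstract describes a two-stage pipeline: a fine-tuned retrieval model (PubMedBERT) selects relevant evidence sentences, and the selected evidence is passed with the hypothesis to a fine-tuned entailment model (BioLinkBERT). The sketch below illustrates that control flow only; the function names, threshold, and scoring interface are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract.
# Stage 1: a sentence-level relevance scorer (e.g. a fine-tuned PubMedBERT
# classifier) filters the clinical trial text to evidence sentences.
# Stage 2: the retained evidence is paired with the hypothesis and passed
# to an entailment classifier (e.g. a fine-tuned BioLinkBERT model).
from typing import Callable, List


def retrieve_evidence(
    hypothesis: str,
    premise_sentences: List[str],
    relevance_fn: Callable[[str, str], float],
    threshold: float = 0.5,  # assumed cut-off; the paper's value is not given here
) -> List[str]:
    """Keep premise sentences the retrieval model scores at or above threshold."""
    return [s for s in premise_sentences if relevance_fn(hypothesis, s) >= threshold]


def predict_entailment(
    hypothesis: str,
    evidence: List[str],
    entailment_fn: Callable[[str, str], str],
) -> str:
    """Concatenate retrieved evidence into a premise and classify the pair."""
    premise = " ".join(evidence)
    return entailment_fn(premise, hypothesis)
```

In practice `relevance_fn` and `entailment_fn` would wrap the fine-tuned transformer models; here they are left abstract so the pipeline shape is visible without model weights.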


APA

Bevan, R., Turbitt, O., & Aboshokor, M. (2023). MDC at SemEval-2023 Task 7: Fine-tuning transformers for textual entailment prediction and evidence retrieval in clinical trials. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023) (pp. 1287–1292). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.semeval-1.179
