ELiRF-VRAIN at BioNLP Task 1B: Radiology Report Summarization

Abstract

This paper presents our system for the Radiology Report Summarization Shared Task-1B of the 22nd BioNLP Workshop 2023. Inspired by the BioBART model, we continued pre-training a general-domain BART model on biomedical data to adapt it to this specific domain. In the pre-training phase, several pre-training tasks are combined to inject linguistic knowledge and to increase the abstractiveness of the generated summaries. We present the results of our models, together with an additional study on the lengths of the generated summaries, which provided us with interesting insights.
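The core idea described in the abstract, continued (domain-adaptive) pre-training of a general-domain BART model on biomedical text with a denoising objective, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the corpus (`biomedical_texts`), the simplified single-span text-infilling corruption, and the hyperparameters are illustrative assumptions, and the real system aggregates several pre-training tasks rather than this one objective.

```python
# Minimal sketch of continued pre-training of BART on in-domain text via a
# simplified text-infilling objective (assumptions: toy corpus, one masked
# span per example, illustrative hyperparameters).
import random
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def mask_span(text, mask_ratio=0.3):
    """Replace one random contiguous span of words with the <mask> token,
    a simplified stand-in for BART's text-infilling corruption."""
    words = text.split()
    span_len = max(1, int(len(words) * mask_ratio))
    start = random.randrange(max(1, len(words) - span_len + 1))
    corrupted = words[:start] + [tokenizer.mask_token] + words[start + span_len:]
    return " ".join(corrupted)

# Placeholder in-domain sentences; a real run would use a radiology corpus.
biomedical_texts = [
    "The cardiomediastinal silhouette is within normal limits.",
    "No focal consolidation, pleural effusion, or pneumothorax is seen.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
for text in biomedical_texts:
    # Encoder sees the corrupted text; decoder is trained to reconstruct the original.
    inputs = tokenizer(mask_span(text), return_tensors="pt",
                       truncation=True, max_length=512)
    labels = tokenizer(text, return_tensors="pt",
                       truncation=True, max_length=512).input_ids
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

After this continued pre-training stage, the adapted checkpoint would be fine-tuned on the shared-task data (findings-to-impression pairs) in the usual sequence-to-sequence fashion.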

Cite (APA)

Ahuir, V., Segarra, E., & Hurtado, L. F. (2023). ELiRF-VRAIN at BioNLP Task 1B: Radiology Report Summarization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 524–529). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.bionlp-1.52
