Care4Lang at MEDIQA-Chat 2023: Fine-tuning Language Models for Classifying and Summarizing Clinical Dialogues


Abstract

Summarizing medical conversations is one of the tasks proposed by MEDIQA-Chat to promote research on automatic clinical note generation from doctor-patient conversations. In this paper, we present our submission to this task using fine-tuned language models, including T5, BART, and BioGPT. The fine-tuned models are evaluated using an ensemble of metrics: ROUGE, BERTScore, and BLEURT. Among the fine-tuned models, FlanT5 achieved the highest aggregated score for dialogue summarization.
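As an illustration of the n-gram-overlap family of metrics mentioned above, the sketch below computes a toy ROUGE-1 F1 score (unigram overlap) between a candidate summary and a reference. This is a simplified, self-contained approximation for intuition only; the paper's evaluation presumably relies on standard ROUGE implementations alongside BERTScore and BLEURT, and the example dialogue snippets here are invented.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Toy ROUGE-1 F1: harmonic mean of unigram precision and recall.

    Tokenization is naive whitespace splitting; real ROUGE toolkits
    apply stemming and more careful preprocessing.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical clinical-note snippets, for illustration only.
score = rouge1_f1(
    "the patient reports chest pain",
    "patient reports mild chest pain",
)
print(round(score, 2))  # 4 shared unigrams out of 5 on each side -> 0.8
```

BERTScore and BLEURT complement this kind of surface overlap by scoring semantic similarity with contextual embeddings, which is why the paper aggregates several metrics rather than relying on ROUGE alone.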

Citation (APA)

Alqahtani, A., Salama, R., Diab, M., & Youssef, A. (2023). Care4Lang at MEDIQA-Chat 2023: Fine-tuning Language Models for Classifying and Summarizing Clinical Dialogues. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 524–528). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.clinicalnlp-1.55
