ClinicalT5: A Generative Language Model for Clinical Text

Abstract

In the past few years, large pre-trained language models (PLMs) have been widely adopted across different areas and have driven substantial improvements on a variety of downstream tasks in natural language processing (NLP). Meanwhile, domain-specific variants of PLMs have been proposed to serve domains with distinctive writing patterns and vocabulary, e.g., BioBERT for the biomedical domain and ClinicalBERT for the clinical domain. Recently, generative language models such as BART and T5 have gained popularity for their competitive performance on text generation as well as on tasks cast as generative problems. In the clinical domain, however, such domain-specific generative variants remain underexplored. To address this gap, our work introduces ClinicalT5, a T5-based text-to-text transformer model pre-trained on clinical text. We evaluate the proposed model both intrinsically and extrinsically on a diverse set of tasks across multiple datasets, and show that ClinicalT5 dramatically outperforms T5 on domain-specific tasks and compares favorably with its closest baselines.
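
Because ClinicalT5 keeps T5's text-to-text interface, downstream use looks the same as with any T5 checkpoint: prepend a task prefix to the input and decode the generated text as the prediction. Below is a minimal sketch using the HuggingFace transformers library; the checkpoint ID "luqh/ClinicalT5-base" and the "summarize:" prefix are assumptions for illustration, not details confirmed by the paper. Substitute the authors' actual released checkpoint path and task formulation.

```python
# Minimal sketch: loading a T5-style clinical checkpoint and running a
# text-to-text task. The model ID below is an assumption; replace it with
# the checkpoint path the ClinicalT5 authors actually released.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name = "luqh/ClinicalT5-base"  # assumed/hypothetical checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Cast the downstream task as text-to-text: prefix the clinical note with a
# task instruction, then decode the generated tokens as the output.
note = "Patient presents with chest pain radiating to the left arm."
inputs = tokenizer("summarize: " + note, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```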

Citation (APA)

Lu, Q., Dou, D., & Nguyen, T. H. (2022). ClinicalT5: A Generative Language Model for Clinical Text. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 5465–5472). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.408
