Prompt Discriminative Language Models for Domain Adaptation

Abstract

Prompt tuning offers an efficient approach to domain adaptation for pretrained language models, but existing work focuses predominantly on masked language modeling or generative objectives, leaving the potential of discriminative language models in biomedical tasks underexplored. To bridge this gap, we develop BIODLM, a method tailored for biomedical domain adaptation of discriminative language models that combines prompt-based continual pretraining with prompt tuning for downstream tasks. BIODLM aims to maximize the potential of discriminative language models in low-resource scenarios by reformulating downstream tasks as span-level corruption detection, thereby enhancing performance on domain-specific tasks and improving the efficiency of continual pretraining. In this way, BIODLM provides a data-efficient domain adaptation method for discriminative language models, effectively improving performance on discriminative tasks within the biomedical domain.
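To make the reformulation concrete, here is a minimal, hedged sketch of the idea the abstract describes: a prompt template's label slot is treated as a possibly corrupted span, and a discriminative language model (ELECTRA-style) scores each candidate label by how "original" the filled-in prompt looks. All names here (the template, `toy_discriminator`, the keyword heuristic standing in for a real pretrained discriminator) are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: a toy heuristic replaces a real pretrained
# discriminator (e.g. an ELECTRA-like model's replaced-token-detection head).

def toy_discriminator(tokens):
    """Stand-in for a discriminative LM: returns per-token probabilities
    that each token is ORIGINAL (i.e. not a replaced/corrupted token).
    A real model would score tokens from learned representations; this toy
    uses a hard-coded plausibility table for the (mention, label) pair."""
    plausible_pairs = {("fever", "symptom"), ("aspirin", "drug")}
    scores = []
    for i, tok in enumerate(tokens):
        if i == len(tokens) - 1:  # the label slot is the candidate span
            scores.append(1.0 if (tokens[0], tok) in plausible_pairs else 0.1)
        else:                     # context tokens look original
            scores.append(0.95)
    return scores

def classify(mention, labels):
    """Prompt-based classification via span-level corruption detection:
    fill the label slot with each candidate and keep the label whose
    prompt the discriminator judges least corrupted."""
    best_label, best_score = None, float("-inf")
    for label in labels:
        tokens = [mention, "is", "a", label]    # template: "<mention> is a <label>"
        score = min(toy_discriminator(tokens))  # span score = least-original token
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("fever", ["symptom", "drug"]))    # -> symptom
print(classify("aspirin", ["symptom", "drug"]))  # -> drug
```

Because the task head is the same corruption-detection objective used in pretraining, no new classification layer is needed, which is what makes this framing attractive in low-resource settings.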

Citation (APA)

Lu, K., Potash, P., Lin, X., Sun, Y., Qian, Z., Yuan, Z., … Lu, J. (2023). Prompt Discriminative Language Models for Domain Adaptation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 247–258). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.clinicalnlp-1.30
