Discriminative pre-trained language models, such as ELECTRA, have achieved promising performance on a variety of general tasks. However, these generic pre-trained models struggle to capture the domain-specific knowledge required for domain-related tasks. In this work, we propose a novel domain-adaptation method for ELECTRA that dynamically selects domain-specific tokens and guides the discriminator to emphasize them, without introducing new training parameters. We show that by re-weighting the losses of domain-specific tokens, ELECTRA can be effectively adapted to different domains. Experimental results in both the computer science and biomedical domains show that the proposed method achieves state-of-the-art results on domain-related tasks.
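The abstract does not spell out how the re-weighting is computed, but the core idea, up-weighting domain-specific tokens in ELECTRA's per-token replaced-token-detection (RTD) loss, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `reweighted_rtd_loss`, the binary `domain_token_mask` (a stand-in for the paper's dynamic token selection), and the weighting factor `alpha` are all hypothetical.

```python
import torch
import torch.nn.functional as F

def reweighted_rtd_loss(logits, labels, domain_token_mask, alpha=2.0):
    """Sketch of a re-weighted replaced-token-detection loss.

    logits:            [batch, seq_len] discriminator scores
    labels:            [batch, seq_len] 1 = replaced token, 0 = original
    domain_token_mask: [batch, seq_len] 1 where a token is judged
                       domain-specific (selection mechanism assumed)
    alpha:             extra weight on domain-specific tokens (hypothetical)
    """
    # Standard per-token binary cross-entropy of ELECTRA's discriminator.
    per_token = F.binary_cross_entropy_with_logits(
        logits, labels.float(), reduction="none")
    # Weight 1.0 for generic tokens, alpha for domain-specific ones.
    weights = 1.0 + (alpha - 1.0) * domain_token_mask.float()
    # Weighted mean so the loss scale stays comparable across batches.
    return (weights * per_token).sum() / weights.sum()
```

Because the weighting only rescales the existing RTD loss, this kind of scheme adds no new trainable parameters, consistent with the claim in the abstract.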