A Unified Knowledge Graph Augmentation Service for Boosting Domain-specific NLP Tasks


Abstract

By focusing the pre-training process on domain-specific corpora, some domain-specific pre-trained language models (PLMs) have achieved state-of-the-art results. However, designing a unified paradigm for injecting domain knowledge during the PLM fine-tuning stage remains under-investigated. We propose KnowledgeDA, a unified domain language model development service that enhances task-specific training with domain knowledge graphs. Given domain-specific task texts as input, KnowledgeDA automatically generates a domain-specific language model in three steps: (i) localize domain knowledge entities in the texts via an embedding-similarity approach; (ii) generate augmented samples by retrieving replaceable domain entity pairs from two views, the knowledge graph and the training data; (iii) select high-quality augmented samples for fine-tuning via confidence-based assessment. We implement a prototype of KnowledgeDA to learn language models for two domains, healthcare and software development. Experiments on domain-specific text classification and QA tasks verify the effectiveness and generalizability of KnowledgeDA.
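As a rough illustration of the three steps, here is a toy Python sketch. It is not the paper's implementation: the character n-gram "embeddings", the threshold values, and the function names (localize_entities, augment, select_confident) are hypothetical simplifications, whereas KnowledgeDA itself uses PLM embeddings and retrieves replaceable entity pairs from both the knowledge graph and the training data.

```python
from collections import Counter
import math

def ngram_vector(text, n=3):
    """Bag of character n-grams, a stand-in for a learned embedding."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(max(1, len(text) - n + 1)))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def localize_entities(text, kg_entities, threshold=0.6):
    """Step (i): link text tokens to KG entities by embedding similarity."""
    matches = []
    for i, tok in enumerate(text.split()):
        vec = ngram_vector(tok)
        best, score = max(
            ((e, cosine(vec, ngram_vector(e))) for e in kg_entities),
            key=lambda pair: pair[1],
        )
        if score >= threshold:
            matches.append((i, best))
    return matches

def augment(text, matches, same_type):
    """Step (ii): swap each matched entity for same-type KG neighbours."""
    tokens = text.split()
    samples = []
    for i, entity in matches:
        for substitute in same_type.get(entity, []):
            new_tokens = tokens.copy()
            new_tokens[i] = substitute
            samples.append(" ".join(new_tokens))
    return samples

def select_confident(samples, confidence, tau=0.8):
    """Step (iii): keep samples a task model scores confidently."""
    return [s for s in samples if confidence(s) >= tau]

# Toy usage: a one-entry "KG" mapping an entity to same-type replacements.
kg = {"aspirin": ["ibuprofen", "naproxen"]}
text = "the patient was given aspirin daily"
matches = localize_entities(text, list(kg), threshold=0.5)
print(augment(text, matches, kg))
# ['the patient was given ibuprofen daily', 'the patient was given naproxen daily']
```

In the paper's setup, the confidence function in step (iii) would plausibly be the fine-tuned task model's predicted probability for the original label, so that low-quality augmentations are filtered out before the final fine-tuning round; the abstract only states that the assessment is confidence-based.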

Cite

APA

Ding, R., Han, X., & Wang, L. (2023). A Unified Knowledge Graph Augmentation Service for Boosting Domain-specific NLP Tasks. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 353–369). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.24
