Abstract
Data sharing restrictions are common in natural language processing datasets. The aim of this study is to develop a model that is trained on a source domain and makes predictions in a target domain. To address this problem, the organizers provided participants with models fine-tuned on a large amount of source-domain data on top of pre-trained models, together with dev data; however, the source-domain data themselves were not distributed. This paper describes the provided model for the named entity recognition task and the ways we developed it further. Because little data is provided, pre-trained models are well suited to this cross-domain task, and models fine-tuned on large amounts of data from other domains can remain effective in the new domain because the task itself does not change. The code for this paper is available at https://github.com/windforfurture/SemEval-2021-Task10.
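As a rough illustration of the source-free setting described above, the sketch below shows confidence-based pseudo-labeling, one common strategy for adapting a source-trained model without access to the source data. This is a toy stand-in, not the paper's actual method: the dictionary "model", the label names, and the threshold are all illustrative assumptions; a real participant system would use the organizer-provided fine-tuned transformer instead.

```python
# Hedged sketch: confidence-based pseudo-labeling, a common source-free
# domain adaptation strategy. The "model" is a toy word -> label-score
# table standing in for the organizer-provided fine-tuned NER model.

def predict(model, token):
    """Return (best_label, confidence) from a toy score table."""
    scores = model.get(token, {"O": 1.0})  # unknown tokens default to "O"
    total = sum(scores.values())
    label = max(scores, key=scores.get)
    return label, scores[label] / total

def pseudo_label(model, tokens, threshold=0.8):
    """Keep only high-confidence source-model predictions as
    pseudo-labels for adapting to the unlabeled target domain."""
    labeled = []
    for tok in tokens:
        label, conf = predict(model, tok)
        if conf >= threshold:
            labeled.append((tok, label))
    return labeled

# Toy "source model": scores as if learned on the inaccessible source domain.
source_model = {
    "aspirin": {"B-Drug": 9.0, "O": 1.0},    # confident prediction, kept
    "patient": {"B-Person": 3.0, "O": 2.5},  # uncertain, discarded
}
print(pseudo_label(source_model, ["aspirin", "patient", "was"]))
# -> [('aspirin', 'B-Drug'), ('was', 'O')]
```

The uncertain token "patient" is dropped so that noisy source-model predictions do not pollute the target-domain training signal.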
Citation
Yu, Z., Wang, J., & Zhang, X. (2021). YNU-HPCC at SemEval-2021 Task 10: Using a Transformer-based Source-Free Domain Adaptation Model for Semantic Processing. In SemEval 2021 - 15th International Workshop on Semantic Evaluation, Proceedings of the Workshop (pp. 1289–1294). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.semeval-1.184