Target-Oriented Fine-tuning for Zero-Resource Named Entity Recognition


Abstract

Zero-resource named entity recognition (NER) severely suffers from data scarcity in a specific domain or language. Most studies on zero-resource NER transfer knowledge from various data by fine-tuning on different auxiliary tasks. However, how to properly select training data and fine-tuning tasks is still an open problem. In this paper, we tackle the problem by transferring knowledge from three aspects, i.e., domain, language and task, and strengthening connections among them. Specifically, we propose four practical guidelines to guide knowledge transfer and task fine-tuning. Based on these guidelines, we design a target-oriented fine-tuning (TOF) framework to exploit various data from three aspects in a unified training manner. Experimental results on six benchmarks show that our method yields consistent improvements over baselines in both cross-domain and cross-lingual scenarios. Particularly, we achieve new state-of-the-art performance on five benchmarks.

Citation (APA)

Zhang, Y., Meng, F., Chen, Y., Xu, J., & Zhou, J. (2021). Target-Oriented Fine-tuning for Zero-Resource Named Entity Recognition. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 1603–1615). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.140
