Existing models for cross-domain named entity recognition (NER) rely on large unlabeled corpora or labeled NER training data in target domains. However, collecting such data for low-resource target domains is not only expensive but also time-consuming. Hence, we propose a cross-domain NER model that does not use any external resources. We first introduce Multi-Task Learning (MTL) by adding an auxiliary objective that detects whether each token is part of a named entity. We then introduce a framework called Mixture of Entity Experts (MoEE) to improve robustness in zero-resource domain adaptation. Finally, experimental results show that our model outperforms strong unsupervised cross-domain sequence-labeling models and comes close to the state-of-the-art model, which leverages extensive resources.
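To make the two components concrete, the sketch below shows one plausible way to combine them in PyTorch: a shared encoder feeds both a full NER tagging head and an auxiliary binary entity-detection head (the MTL objective), while a Mixture of Entity Experts layer gates per-entity-type expert representations into the tagging input. This is a minimal sketch under stated assumptions; the BiLSTM encoder, layer sizes, expert count, and loss weight alpha are illustrative choices, not the paper's exact architecture.

# Minimal sketch of the abstract's two ideas: (1) an auxiliary objective that
# predicts whether each token belongs to a named entity, and (2) a Mixture of
# Entity Experts (MoEE) layer combining per-entity-type expert representations
# via a learned gate. Encoder type and all hyperparameters are assumptions.
import torch
import torch.nn as nn

class MoEENER(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=200,
                 num_entity_types=4, num_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Shared encoder (assumed BiLSTM; the paper's encoder may differ).
        self.encoder = nn.LSTM(embed_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # One "expert" per entity type plus one for non-entity tokens.
        self.experts = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim)
             for _ in range(num_entity_types + 1)]
        )
        # Gate assigns each token a distribution over experts.
        self.gate = nn.Linear(hidden_dim, num_entity_types + 1)
        # Task heads: full NER tagging and binary entity detection (MTL).
        self.ner_head = nn.Linear(hidden_dim, num_tags)
        self.detect_head = nn.Linear(hidden_dim, 2)  # entity vs. non-entity

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))        # (B, T, H)
        gate_probs = torch.softmax(self.gate(h), dim=-1)  # (B, T, E)
        expert_out = torch.stack([e(h) for e in self.experts],
                                 dim=2)                   # (B, T, E, H)
        # Mixture: gate-weighted sum of expert representations.
        mixed = (gate_probs.unsqueeze(-1) * expert_out).sum(dim=2)
        return self.ner_head(mixed), self.detect_head(h)

# Joint loss: NER tagging loss plus the weighted auxiliary detection loss.
def mtl_loss(ner_logits, detect_logits, ner_tags, entity_mask, alpha=0.5):
    ce = nn.CrossEntropyLoss()
    loss_ner = ce(ner_logits.flatten(0, 1), ner_tags.flatten())
    loss_det = ce(detect_logits.flatten(0, 1), entity_mask.flatten())
    return loss_ner + alpha * loss_det

# Example: a batch of 2 sentences of length 16 with a 5k-word vocabulary.
model = MoEENER(vocab_size=5000)
ner_logits, detect_logits = model(torch.randint(0, 5000, (2, 16)))

In this reading, the binary detection head supplies a domain-invariant signal (entity vs. non-entity) that can transfer even when target-domain entity types are unseen, while the gate lets each token draw on the expert most relevant to its likely entity type.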
Citation
Liu, Z., Winata, G. I., & Fung, P. (2020). Zero-resource cross-domain named entity recognition. In Proceedings of the 5th Workshop on Representation Learning for NLP (RepL4NLP) (pp. 1–6). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.repl4nlp-1.1