Abstract
Named Entity Recognition (NER) for low-resource languages is both a practical and a challenging research problem. This paper addresses zero-shot cross-lingual transfer for NER, especially when the amount of source-language training data is also limited. It first proposes a simple but effective labeled sequence translation method that translates source-language training data into target languages while avoiding problems such as word-order changes and entity-span determination. With the source-language data and the translated data, a generation-based multilingual data augmentation method is then introduced to further increase diversity by generating synthetic labeled data in multiple languages. These augmented data enable language-model-based NER models to generalize better, drawing on both language-specific features from target-language synthetic data and language-independent features from multilingual synthetic data. Extensive experiments demonstrate encouraging cross-lingual transfer performance on a wide variety of target languages.
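The core difficulty that labeled sequence translation tackles can be illustrated with a minimal sketch: when a labeled sentence is machine-translated, word order may change and entity boundaries are lost. One common way to preserve spans, shown below, is to mask each entity with a placeholder token before translation and substitute the entities back afterwards. The placeholder scheme and the dictionary-based `mt` / `ents` stubs are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
def mask_entities(tokens, spans):
    """Replace each labeled entity span with a placeholder like __E0__.

    tokens: list of words; spans: list of (start, end, label), end exclusive.
    Returns the masked sentence and a placeholder -> (entity text, label) map.
    """
    mapping, out, i = {}, [], 0
    for k, (s, e, label) in enumerate(sorted(spans)):
        out.extend(tokens[i:s])
        ph = f"__E{k}__"
        mapping[ph] = (" ".join(tokens[s:e]), label)
        out.append(ph)
        i = e
    out.extend(tokens[i:])
    return " ".join(out), mapping


def unmask(translated, mapping, translate_entity):
    """Substitute translated entities back and recover target-language spans."""
    result, spans = [], []
    for tok in translated.split():
        if tok in mapping:
            ent, label = mapping[tok]
            ent_toks = translate_entity(ent).split()
            spans.append((len(result), len(result) + len(ent_toks), label))
            result.extend(ent_toks)
        else:
            result.append(tok)
    return result, spans


# Toy English -> German example with stub translators (assumed for illustration).
sent = "Barack Obama visited Berlin".split()
gold = [(0, 2, "PER"), (3, 4, "LOC")]
masked, mapping = mask_entities(sent, gold)   # placeholders survive translation
mt = {"__E0__ visited __E1__": "__E0__ besuchte __E1__"}   # stub sentence MT
ents = {"Barack Obama": "Barack Obama", "Berlin": "Berlin"}  # stub entity MT
tgt_tokens, tgt_spans = unmask(mt[masked], mapping, ents.get)
print(tgt_tokens, tgt_spans)
```

Because the placeholders travel through translation as opaque tokens, the recovered spans stay aligned even when the target language reorders the sentence.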
Citation
Liu, L., Ding, B., Bing, L., Joty, S., Si, L., & Miao, C. (2021). MulDA: A multilingual data augmentation framework for low-resource cross-lingual NER. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (Vol. 1, pp. 5834–5846). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-long.453