Sequence tagging with contextual and non-contextual subword representations: A multilingual evaluation

Citations: 34
Mendeley readers: 175

Abstract

Pretrained contextual and non-contextual subword embeddings have become available in over 250 languages, allowing massively multilingual NLP. However, while there is no dearth of pretrained embeddings, the distinct lack of systematic evaluations makes it difficult for practitioners to choose between them. In this work, we conduct an extensive evaluation comparing non-contextual subword embeddings, namely FastText and BPEmb, and a contextual representation method, namely BERT, on multilingual named entity recognition and part-of-speech tagging. We find that overall, a combination of BERT, BPEmb, and character representations works well across languages and tasks. A more detailed analysis reveals different strengths and weaknesses: Multilingual BERT performs well in medium- to high-resource languages, but is outperformed by non-contextual subword embeddings in a low-resource setting.
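As an illustration of how the representations compared in the paper can be obtained and combined for a sequence tagger, the following is a minimal sketch using the bpemb and transformers Python packages with PyTorch and NumPy. The specific model choices (English BPEmb with a 100k BPE vocabulary, bert-base-multilingual-cased) and the simple mean-pooling and concatenation scheme are assumptions for illustration only, not the authors' experimental setup.

import numpy as np
import torch
from bpemb import BPEmb
from transformers import AutoModel, AutoTokenizer

words = ["Heidelberg", "is", "a", "city", "in", "Germany"]

# Non-contextual subword embeddings: BPEmb (byte-pair-encoding subwords).
# Model choice (lang="en", vs=100000, dim=300) is an assumed example.
bpemb = BPEmb(lang="en", vs=100000, dim=300)
# Pool each word's BPE-subword vectors into one vector per word.
bpemb_per_word = np.stack([bpemb.embed(w).mean(axis=0) for w in words])  # (6, 300)

# Contextual subword embeddings: multilingual BERT wordpieces.
tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")
enc = tok(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**enc).last_hidden_state.squeeze(0)  # (num_wordpieces, 768)

# Pool BERT wordpiece vectors back to one vector per word.
word_ids = enc.word_ids(0)  # wordpiece -> word index (None for [CLS]/[SEP])
bert_per_word = np.stack([
    hidden[[i for i, w in enumerate(word_ids) if w == j]].mean(dim=0).numpy()
    for j in range(len(words))
])  # (6, 768)

# Concatenate both views per word; a sequence tagger (e.g. a BiLSTM-CRF),
# optionally with an additional character-level vector, would consume these.
features = np.concatenate([bpemb_per_word, bert_per_word], axis=1)  # (6, 1068)
print(features.shape)

In the paper's setting, such per-word feature vectors (subword, contextual, and character representations in various combinations) feed a tagging layer for NER and POS tagging across languages.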

Citation (APA)
Heinzerling, B., & Strube, M. (2019). Sequence tagging with contextual and non-contextual subword representations: A multilingual evaluation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) (pp. 273–291). Association for Computational Linguistics. https://doi.org/10.18653/v1/p19-1027
