Evaluating Pretrained Transformer-based Models on the Task of Fine-Grained Named Entity Recognition

26 citations · 93 Mendeley readers

Abstract

Named Entity Recognition (NER) is a fundamental Natural Language Processing (NLP) task and remains an active research field. In recent years, transformer models, and in particular the BERT model developed at Google, have revolutionised the field of NLP. While the performance of transformer-based approaches such as BERT has been studied for NER, no study has yet addressed the fine-grained Named Entity Recognition (FG-NER) task. In this paper, we compare three transformer-based models (BERT, RoBERTa, and XLNet) to two non-transformer-based models (CRF and BiLSTM-CNN-CRF). Furthermore, we apply each model to a range of distinct domains. We find that transformer-based models incrementally outperform the studied non-transformer-based models in most domains with respect to F1 score. We also find that the choice of domain significantly influences performance, regardless of the data size or the model chosen.
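Although the paper itself ships no code, the comparison it describes boils down to fine-tuning a pretrained transformer with a token-classification head and scoring predictions with entity-level F1. The following is a minimal sketch of that setup using the HuggingFace transformers library; the model name, the toy label set, and the example sentence are illustrative assumptions rather than the authors' configuration.

from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

# Toy tag set; an FG-NER scheme distinguishes far more entity types.
labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels)
)

# Forward pass over one sentence; training would minimise cross-entropy
# between these logits and the gold tag id of each sub-token.
enc = tokenizer("John lives in Berlin", return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits              # shape: (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1).squeeze(0)   # predicted tag id per sub-token
# The classification head is untrained here, so these predictions are
# random; fine-tuning on labelled FG-NER data is required.
print([labels[i] for i in pred_ids])

The F1 score the abstract refers to is conventionally computed at the entity level rather than per token; the seqeval package implements this. The gold and predicted sequences below are, again, illustrative.

from seqeval.metrics import f1_score

gold = [["B-PER", "I-PER", "O", "B-LOC"]]
pred = [["B-PER", "I-PER", "O", "O"]]
print(f1_score(gold, pred))  # entity-level F1: one of two gold entities found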

Cite (APA)

Lothritz, C., Allix, K., Veiber, L., Bissyandé, T. F., & Klein, J. (2020). Evaluating Pretrained Transformer-based Models on the Task of Fine-Grained Named Entity Recognition. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 3750–3760). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.334
