Abstract
This paper describes the system we built to participate in SemEval-2022 Task 11: MultiCoNER Multilingual Complex Named Entity Recognition, specifically the English monolingual track. The system is built on Pre-trained Language Models (PLMs); in particular, a BERT-based pre-trained model is fine-tuned for the named entity recognition task. We evaluated the system on the two test datasets of the shared task, corresponding to the Practice Phase and the Evaluation Phase of the competition.
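The abstract's fine-tuning approach involves a standard preprocessing step: word-level BIO entity labels must be aligned to the subword tokens a BERT-style tokenizer produces. The sketch below illustrates that alignment in plain Python; the function name, label ids, and the `-100` ignore index are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: aligning word-level BIO labels to subword tokens,
# a common step when fine-tuning a BERT-based model for NER.
IGNORE = -100  # loss-masking index conventionally used for masked positions

def align_labels(word_ids, word_labels):
    """Map word-level label ids onto a subword token sequence.

    word_ids: for each subword token, the index of its source word,
              or None for special tokens such as [CLS] and [SEP].
    word_labels: one BIO label id per original word.
    """
    aligned = []
    prev = None
    for wid in word_ids:
        if wid is None:
            aligned.append(IGNORE)            # special token: excluded from loss
        elif wid != prev:
            aligned.append(word_labels[wid])  # first subword keeps the word label
        else:
            aligned.append(IGNORE)            # later subwords are masked
        prev = wid
    return aligned

# Example: "Boston Dynamics", with "Dynamics" split into two subwords:
# [CLS] Boston Dyn ##amics [SEP]
word_ids = [None, 0, 1, 1, None]
word_labels = [1, 2]  # 1 = B-ORG, 2 = I-ORG (illustrative label ids)
print(align_labels(word_ids, word_labels))  # [-100, 1, 2, -100, -100]
```

Only the first subword of each word carries the label; masking the rest keeps the loss defined over one prediction per word.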
Citation
Nguyen, D. T., & Huynh, H. K. N. (2022). DANGNT-SGU at SemEval-2022 Task 11: Using Pre-trained Language Model for Complex Named Entity Recognition. In SemEval 2022 - 16th International Workshop on Semantic Evaluation, Proceedings of the Workshop (pp. 1483–1487). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.semeval-1.203