YNUNLP at SemEval-2023 Task 2: The Pseudo Twin Tower Pre-training Model for Chinese Named Entity Recognition


Abstract

This paper describes our system for SemEval-2023 Task 2: MultiCoNER II, Multilingual Complex Named Entity Recognition, Track 9 (Chinese). The task requires identifying entity boundaries and assigning category labels across six coarse-grained categories, with a fine-grained taxonomy of 36 NE classes that makes the dataset a realistic challenge for NER. We use BERT embeddings to represent each character of the input sentence and train a CRF layer with R-Drop regularization to predict named entity categories, using the dataset provided by the organizers. Our best submission achieved a macro-averaged F1 score of 0.5657, ranking 15th out of 22 teams.
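
The abstract describes the architecture only at a high level. Below is a minimal sketch, not the authors' released code, of how a BERT + CRF character tagger with an R-Drop-style consistency loss can be wired up in PyTorch (assuming the transformers and pytorch-crf packages). The model name, tag count, dropout rate, and the weight rdrop_alpha are illustrative assumptions, and the consistency term is applied to the emission logits rather than to full CRF marginals.

import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel
from torchcrf import CRF

class BertCrfRdropTagger(nn.Module):
    def __init__(self, model_name="bert-base-chinese", num_tags=73, rdrop_alpha=1.0):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)
        self.rdrop_alpha = rdrop_alpha

    def emissions(self, input_ids, attention_mask):
        # Per-character contextual embeddings from BERT, projected to tag scores.
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.classifier(self.dropout(hidden))

    def forward(self, input_ids, attention_mask, labels):
        mask = attention_mask.bool()
        # Two stochastic forward passes (different dropout masks) for R-Drop.
        e1 = self.emissions(input_ids, attention_mask)
        e2 = self.emissions(input_ids, attention_mask)
        # Average CRF negative log-likelihood over the two passes.
        nll = -(self.crf(e1, labels, mask=mask, reduction="mean")
                + self.crf(e2, labels, mask=mask, reduction="mean")) / 2
        # Symmetric KL between the two emission distributions as the consistency term.
        p1, p2 = F.log_softmax(e1, dim=-1), F.log_softmax(e2, dim=-1)
        kl = (F.kl_div(p1, p2.exp(), reduction="batchmean")
              + F.kl_div(p2, p1.exp(), reduction="batchmean")) / 2
        return nll + self.rdrop_alpha * kl

    def decode(self, input_ids, attention_mask):
        # Viterbi decoding over the CRF at inference time.
        e = self.emissions(input_ids, attention_mask)
        return self.crf.decode(e, mask=attention_mask.bool())

At training time the loss combines the averaged CRF negative log-likelihood of the two dropout-perturbed passes with the symmetric KL term; at inference a single pass is decoded with Viterbi.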

Citation (APA)

Li, J., & Zhou, X. (2023). YNUNLP at SemEval-2023 Task 2: The Pseudo Twin Tower Pre-training Model for Chinese Named Entity Recognition. In 17th International Workshop on Semantic Evaluation, SemEval 2023 - Proceedings of the Workshop (pp. 1619–1624). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.semeval-1.224
